Nov 24 11:09:07 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 24 11:09:07 crc restorecon[4763]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:09:07 crc restorecon[4763]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:07 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 
11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc 
restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:09:08 crc restorecon[4763]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947
Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 11:09:08 crc restorecon[4763]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 11:09:08 crc restorecon[4763]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Nov 24 11:09:08 crc kubenswrapper[5072]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 24 11:09:08 crc kubenswrapper[5072]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Nov 24 11:09:08 crc kubenswrapper[5072]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 24 11:09:08 crc kubenswrapper[5072]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 24 11:09:08 crc kubenswrapper[5072]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 24 11:09:08 crc kubenswrapper[5072]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.739713 5072 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.744749 5072 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.744777 5072 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.744787 5072 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.744796 5072 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.744804 5072 feature_gate.go:330] unrecognized feature gate: Example
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.744814 5072 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.744823 5072 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.744831 5072 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.744839 5072 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.744847 5072 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.744856 5072 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.744866 5072 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.744876 5072 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.744885 5072 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.744902 5072 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.744914 5072 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.744924 5072 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.744933 5072 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.744941 5072 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.744950 5072 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.744959 5072 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.744968 5072 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.744976 5072 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.744987 5072 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.744997 5072 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745006 5072 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745014 5072 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745022 5072 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745030 5072 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745039 5072 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745047 5072 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745055 5072 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745063 5072 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745071 5072 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745079 5072 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745086 5072 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745094 5072 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745102 5072 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745110 5072 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745117 5072 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745125 5072 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745133 5072 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745141 5072 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745148 5072 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745156 5072 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745164 5072 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745171 5072 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745179 5072 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745187 5072 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745195 5072 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745203 5072 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745211 5072 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745218 5072 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745226 5072 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745233 5072 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745241 5072 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745254 5072 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745264 5072 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745273 5072 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745281 5072 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745290 5072 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745299 5072 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745307 5072 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745316 5072 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745324 5072 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745332 5072 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745340 5072 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745348 5072 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745356 5072 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745365 5072 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.745395 5072 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746477 5072 flags.go:64] FLAG: --address="0.0.0.0" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746499 5072 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746514 5072 flags.go:64] FLAG: --anonymous-auth="true" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746525 5072 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746537 5072 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746546 5072 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746558 5072 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746573 5072 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746582 5072 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746591 5072 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746601 5072 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746610 5072 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746620 5072 flags.go:64] FLAG: 
--cgroup-driver="cgroupfs" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746629 5072 flags.go:64] FLAG: --cgroup-root="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746637 5072 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746646 5072 flags.go:64] FLAG: --client-ca-file="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746655 5072 flags.go:64] FLAG: --cloud-config="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746665 5072 flags.go:64] FLAG: --cloud-provider="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746674 5072 flags.go:64] FLAG: --cluster-dns="[]" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746686 5072 flags.go:64] FLAG: --cluster-domain="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746694 5072 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746704 5072 flags.go:64] FLAG: --config-dir="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746713 5072 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746723 5072 flags.go:64] FLAG: --container-log-max-files="5" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746734 5072 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746743 5072 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746752 5072 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746761 5072 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746771 5072 flags.go:64] FLAG: --contention-profiling="false" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746780 5072 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746789 5072 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746798 5072 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746807 5072 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746850 5072 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746859 5072 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746868 5072 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746877 5072 flags.go:64] FLAG: --enable-load-reader="false" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746887 5072 flags.go:64] FLAG: --enable-server="true" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746896 5072 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746907 5072 flags.go:64] FLAG: --event-burst="100" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746917 5072 flags.go:64] FLAG: --event-qps="50" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746926 5072 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746935 5072 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 24 
11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746945 5072 flags.go:64] FLAG: --eviction-hard="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746956 5072 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746965 5072 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746974 5072 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746984 5072 flags.go:64] FLAG: --eviction-soft="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.746993 5072 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747002 5072 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747011 5072 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747020 5072 flags.go:64] FLAG: --experimental-mounter-path="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747029 5072 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747038 5072 flags.go:64] FLAG: --fail-swap-on="true" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747048 5072 flags.go:64] FLAG: --feature-gates="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747058 5072 flags.go:64] FLAG: --file-check-frequency="20s" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747067 5072 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747077 5072 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747086 5072 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747095 5072 flags.go:64] FLAG: --healthz-port="10248" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747104 5072 flags.go:64] FLAG: --help="false" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747113 5072 flags.go:64] FLAG: --hostname-override="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747122 5072 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747131 5072 flags.go:64] FLAG: --http-check-frequency="20s" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747140 5072 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747149 5072 flags.go:64] FLAG: --image-credential-provider-config="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747158 5072 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747167 5072 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747175 5072 flags.go:64] FLAG: --image-service-endpoint="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747184 5072 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747193 5072 flags.go:64] FLAG: --kube-api-burst="100" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747202 5072 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747212 5072 flags.go:64] FLAG: --kube-api-qps="50" Nov 24 11:09:08 crc 
kubenswrapper[5072]: I1124 11:09:08.747221 5072 flags.go:64] FLAG: --kube-reserved="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747229 5072 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747238 5072 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747247 5072 flags.go:64] FLAG: --kubelet-cgroups="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747256 5072 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747265 5072 flags.go:64] FLAG: --lock-file="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747275 5072 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747284 5072 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747296 5072 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747309 5072 flags.go:64] FLAG: --log-json-split-stream="false" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747318 5072 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747327 5072 flags.go:64] FLAG: --log-text-split-stream="false" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747336 5072 flags.go:64] FLAG: --logging-format="text" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747345 5072 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747355 5072 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747363 5072 flags.go:64] FLAG: --manifest-url="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747401 5072 flags.go:64] FLAG: --manifest-url-header="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747413 5072 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747423 5072 flags.go:64] FLAG: --max-open-files="1000000" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747434 5072 flags.go:64] FLAG: --max-pods="110" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747443 5072 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747452 5072 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747462 5072 flags.go:64] FLAG: --memory-manager-policy="None" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747471 5072 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747481 5072 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747489 5072 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747498 5072 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747517 5072 flags.go:64] FLAG: --node-status-max-images="50" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747526 5072 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747535 5072 
flags.go:64] FLAG: --oom-score-adj="-999" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747544 5072 flags.go:64] FLAG: --pod-cidr="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747553 5072 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747567 5072 flags.go:64] FLAG: --pod-manifest-path="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747576 5072 flags.go:64] FLAG: --pod-max-pids="-1" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747585 5072 flags.go:64] FLAG: --pods-per-core="0" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747594 5072 flags.go:64] FLAG: --port="10250" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747603 5072 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747612 5072 flags.go:64] FLAG: --provider-id="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747621 5072 flags.go:64] FLAG: --qos-reserved="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747630 5072 flags.go:64] FLAG: --read-only-port="10255" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747639 5072 flags.go:64] FLAG: --register-node="true" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747649 5072 flags.go:64] FLAG: --register-schedulable="true" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747666 5072 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747680 5072 flags.go:64] FLAG: --registry-burst="10" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747690 5072 flags.go:64] FLAG: --registry-qps="5" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747699 5072 flags.go:64] FLAG: --reserved-cpus="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747708 5072 flags.go:64] FLAG: --reserved-memory="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747719 5072 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747728 5072 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747737 5072 flags.go:64] FLAG: --rotate-certificates="false" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747747 5072 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747755 5072 flags.go:64] FLAG: --runonce="false" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747764 5072 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747773 5072 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747783 5072 flags.go:64] FLAG: --seccomp-default="false" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747792 5072 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747801 5072 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747810 5072 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747819 5072 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 
11:09:08.747829 5072 flags.go:64] FLAG: --storage-driver-password="root" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747838 5072 flags.go:64] FLAG: --storage-driver-secure="false" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747846 5072 flags.go:64] FLAG: --storage-driver-table="stats" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747855 5072 flags.go:64] FLAG: --storage-driver-user="root" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747864 5072 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747873 5072 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747883 5072 flags.go:64] FLAG: --system-cgroups="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747891 5072 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747905 5072 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747914 5072 flags.go:64] FLAG: --tls-cert-file="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747923 5072 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747935 5072 flags.go:64] FLAG: --tls-min-version="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747944 5072 flags.go:64] FLAG: --tls-private-key-file="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747952 5072 flags.go:64] FLAG: --topology-manager-policy="none" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747962 5072 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747971 5072 flags.go:64] FLAG: --topology-manager-scope="container" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747980 5072 flags.go:64] FLAG: --v="2" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.747991 5072 flags.go:64] FLAG: --version="false" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.748002 5072 flags.go:64] FLAG: --vmodule="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.748013 5072 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.748023 5072 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748222 5072 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748233 5072 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748242 5072 feature_gate.go:330] unrecognized feature gate: Example Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748250 5072 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748259 5072 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748267 5072 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748275 5072 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748283 5072 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 
11:09:08.748291 5072 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748299 5072 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748306 5072 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748314 5072 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748322 5072 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748332 5072 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748343 5072 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748352 5072 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748361 5072 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748370 5072 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748403 5072 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748412 5072 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748420 5072 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748428 5072 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748436 5072 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748444 5072 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748452 5072 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748461 5072 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748469 5072 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748477 5072 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748484 5072 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748492 5072 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748500 5072 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748507 5072 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748516 5072 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748524 5072 feature_gate.go:330] unrecognized 
feature gate: PersistentIPsForVirtualization Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748532 5072 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748543 5072 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748552 5072 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748560 5072 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748569 5072 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748576 5072 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748584 5072 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748592 5072 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748600 5072 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748608 5072 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748616 5072 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748624 5072 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748634 5072 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
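The long run of `FLAG: --name="value"` entries earlier in this record (flags.go:64) is the kubelet echoing the effective value of every registered flag, which it does because this invocation runs at verbosity 2 (`--v="2"` appears in the dump itself). A minimal standard-library sketch of that pattern, assuming two representative flags; this is the generic Go idiom, not the kubelet's actual flag code:

```go
// flagdump.go — a minimal sketch of the FLAG: --name="value" pattern above.
package main

import (
	"flag"
	"log"
)

func main() {
	// Two representative flags; the real kubelet registers dozens.
	flag.String("container-runtime-endpoint", "/var/run/crio/crio.sock", "CRI socket")
	v := flag.Int("v", 2, "log verbosity")
	flag.Parse()

	if *v >= 2 {
		// Echo every registered flag with its effective value; VisitAll walks
		// flags in lexicographical order, matching the alphabetical dump above.
		flag.VisitAll(func(f *flag.Flag) {
			log.Printf("FLAG: --%s=%q", f.Name, f.Value)
		})
	}
}
```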
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748644 5072 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748653 5072 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748662 5072 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748670 5072 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748679 5072 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748695 5072 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748704 5072 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748713 5072 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748721 5072 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748729 5072 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748740 5072 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748748 5072 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748757 5072 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748765 5072 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748772 5072 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748780 5072 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748788 5072 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748796 5072 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748804 5072 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748812 5072 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748819 5072 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748828 5072 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748836 5072 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.748848 5072 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.748873 5072 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.762489 5072 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.762546 5072 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.762671 5072 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.762687 5072 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.762697 5072 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.762710 5072 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.762724 5072 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.762734 5072 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.762742 5072 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.762753 5072 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.762764 5072 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
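Each feature_gate.go:330 warning in this record is the same event: a gate name handed down in the cluster's feature set that this binary's gate registry does not contain. OpenShift distributes one cluster-wide gate list to every component, so the kubelet warns on operator-level names it does not own (GatewayAPI, PinnedImages, and so on), applies the ones it recognizes, and logs the effective map at feature_gate.go:386, as just above. The list is re-parsed several times during startup, which is why the identical warning block repeats below. A minimal sketch of that lookup, with an abbreviated, assumed known-gate set:

```go
// gatecheck.go — a minimal sketch of the lookup behind the
// feature_gate.go:330 warnings above; the known set is abbreviated.
package main

import "fmt"

func main() {
	// Gates this (hypothetical) component knows about, with defaults.
	known := map[string]bool{
		"CloudDualStackNodeIPs":     false,
		"KMSv1":                     false,
		"ValidatingAdmissionPolicy": false,
	}
	// What the cluster hands down: component gates mixed with
	// operator-level OpenShift gates the kubelet does not own.
	requested := map[string]bool{
		"CloudDualStackNodeIPs":     true,
		"KMSv1":                     true,
		"ValidatingAdmissionPolicy": true,
		"GatewayAPI":                true, // unrecognized here
		"PinnedImages":              true, // unrecognized here
	}
	for name, enabled := range requested {
		if _, ok := known[name]; !ok {
			// Unknown names are warned about and skipped, not fatal.
			fmt.Printf("W unrecognized feature gate: %s\n", name)
			continue
		}
		known[name] = enabled
	}
	// The surviving map corresponds to the "feature gates: {map[...]}" line.
	fmt.Printf("I feature gates: %v\n", known)
}
```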
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.762775 5072 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.762785 5072 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.762795 5072 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.762805 5072 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.762856 5072 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.762866 5072 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.762876 5072 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.762887 5072 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.762897 5072 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.762909 5072 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.762920 5072 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.762930 5072 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.762940 5072 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.762951 5072 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.762961 5072 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.762971 5072 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.762981 5072 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.762991 5072 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763001 5072 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763011 5072 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763021 5072 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763031 5072 feature_gate.go:330] unrecognized feature gate: Example Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763043 5072 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763053 5072 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763063 5072 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763077 5072 feature_gate.go:330] 
unrecognized feature gate: NetworkSegmentation Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763088 5072 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763098 5072 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763105 5072 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763137 5072 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763146 5072 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763157 5072 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763167 5072 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763177 5072 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763188 5072 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763197 5072 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763208 5072 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763216 5072 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763224 5072 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763231 5072 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763239 5072 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763248 5072 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763264 5072 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763276 5072 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763285 5072 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763295 5072 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763305 5072 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763313 5072 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763322 5072 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763330 5072 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763338 5072 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763346 5072 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763354 5072 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763362 5072 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763400 5072 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763410 5072 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763420 5072 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763431 5072 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763440 5072 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763450 5072 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763459 5072 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763472 5072 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.763489 5072 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763714 5072 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763728 5072 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763736 5072 feature_gate.go:330] unrecognized 
feature gate: IngressControllerLBSubnetsAWS Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763746 5072 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763755 5072 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763764 5072 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763772 5072 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763781 5072 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763788 5072 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763799 5072 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763807 5072 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763816 5072 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763823 5072 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763832 5072 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763845 5072 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763861 5072 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763872 5072 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763882 5072 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763893 5072 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763904 5072 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763914 5072 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763925 5072 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763934 5072 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763942 5072 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763950 5072 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763959 5072 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763967 5072 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763976 5072 feature_gate.go:330] unrecognized feature gate: 
InsightsOnDemandDataGather Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763986 5072 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.763999 5072 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764012 5072 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764024 5072 feature_gate.go:330] unrecognized feature gate: Example Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764034 5072 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764047 5072 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764060 5072 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764071 5072 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764081 5072 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764094 5072 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764108 5072 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764119 5072 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764131 5072 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764142 5072 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764152 5072 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764164 5072 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764176 5072 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764189 5072 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764200 5072 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764212 5072 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764223 5072 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764233 5072 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764244 5072 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764254 5072 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764264 5072 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 11:09:08 crc 
kubenswrapper[5072]: W1124 11:09:08.764275 5072 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764285 5072 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764295 5072 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764305 5072 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764316 5072 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764325 5072 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764339 5072 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764351 5072 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764361 5072 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764401 5072 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764410 5072 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764418 5072 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764426 5072 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764434 5072 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764442 5072 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764449 5072 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764457 5072 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.764468 5072 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.764481 5072 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.764754 5072 server.go:940] "Client rotation is on, will bootstrap in background" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.773614 5072 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.773754 5072 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
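The entry above loads the client credential from /var/lib/kubelet/pki/kubelet-client-current.pem, and the rotation entries just below schedule a renewal deadline well before the certificate's expiry (a jittered point inside the validity window, here roughly 772 hours out). A sketch for inspecting that file's validity off-cluster; the path is taken from the log and the rest is illustrative:

```go
// certcheck.go — a sketch for inspecting the kubelet client certificate
// loaded above; path and output format are illustrative, not kubelet code.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		log.Fatal(err)
	}
	// The file holds the cert and key; take the first CERTIFICATE block.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("subject=%s expires=%s (in %s)\n",
			cert.Subject, cert.NotAfter.UTC(), time.Until(cert.NotAfter).Round(time.Hour))
		return
	}
	log.Fatal("no CERTIFICATE block found")
}
```

An equivalent quick check is `openssl x509 -noout -enddate -in /var/lib/kubelet/pki/kubelet-client-current.pem`.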
Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.775663 5072 server.go:997] "Starting client certificate rotation" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.775713 5072 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.775946 5072 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-26 15:26:46.499146127 +0000 UTC Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.776076 5072 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 772h17m37.72307568s for next certificate rotation Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.807697 5072 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.810820 5072 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.832202 5072 log.go:25] "Validated CRI v1 runtime API" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.876199 5072 log.go:25] "Validated CRI v1 image API" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.878665 5072 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.886320 5072 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-24-10-59-16-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.886439 5072 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.914217 5072 manager.go:217] Machine: {Timestamp:2025-11-24 11:09:08.91068736 +0000 UTC m=+0.622211906 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:d0383649-b062-48ed-9fc1-5e553cb9256a BootID:a41d3a9c-0834-482e-9391-dff98db0f196 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 
DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:ce:57:18 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:ce:57:18 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:66:01:9e Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:78:7c:db Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:5f:78:9d Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:93:d6:c5 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:f8:59:da Speed:-1 Mtu:1496} {Name:eth10 MacAddress:36:52:76:3f:81:25 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:42:18:09:c2:d0:23 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.914982 5072 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.915185 5072 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.915653 5072 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.915983 5072 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.916034 5072 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.916331 5072 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.916350 5072 container_manager_linux.go:303] "Creating device plugin manager" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.916995 5072 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.917692 5072 server.go:66] "Creating device plugin registration server" 
version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.917953 5072 state_mem.go:36] "Initialized new in-memory state store" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.918079 5072 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.924121 5072 kubelet.go:418] "Attempting to sync node with API server" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.924160 5072 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.924208 5072 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.924229 5072 kubelet.go:324] "Adding apiserver pod source" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.924247 5072 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.928552 5072 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.929194 5072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.929222 5072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Nov 24 11:09:08 crc kubenswrapper[5072]: E1124 11:09:08.929345 5072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:09:08 crc kubenswrapper[5072]: E1124 11:09:08.929510 5072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.929675 5072 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.932294 5072 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.933817 5072 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.933884 5072 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.933900 5072 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.933913 5072 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.933934 5072 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.933947 5072 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.933959 5072 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.933981 5072 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.933995 5072 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.934008 5072 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.934033 5072 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.934046 5072 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.935107 5072 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.935773 5072 server.go:1280] "Started kubelet" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.936922 5072 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.936885 5072 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.937468 5072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Nov 24 11:09:08 crc systemd[1]: Started Kubernetes Kubelet. 
Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.942781 5072 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.947557 5072 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.947677 5072 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.947896 5072 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 20:02:43.284648339 +0000 UTC Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.947951 5072 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 656h53m34.336702743s for next certificate rotation Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.948090 5072 server.go:460] "Adding debug handlers to kubelet server" Nov 24 11:09:08 crc kubenswrapper[5072]: E1124 11:09:08.948600 5072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.948655 5072 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.949034 5072 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.949316 5072 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 24 11:09:08 crc kubenswrapper[5072]: W1124 11:09:08.950308 5072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Nov 24 11:09:08 crc kubenswrapper[5072]: E1124 11:09:08.950434 5072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.950933 5072 factory.go:55] Registering systemd factory Nov 24 11:09:08 crc kubenswrapper[5072]: E1124 11:09:08.951117 5072 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="200ms" Nov 24 11:09:08 crc kubenswrapper[5072]: E1124 11:09:08.950267 5072 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.110:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187aecc8434d60f6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-24 11:09:08.935729398 +0000 UTC m=+0.647253904,LastTimestamp:2025-11-24 11:09:08.935729398 +0000 UTC m=+0.647253904,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.951781 5072 factory.go:221] Registration of the systemd container factory successfully Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.952512 5072 factory.go:153] Registering CRI-O factory Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.952556 5072 factory.go:221] Registration of the crio container factory successfully Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.952651 5072 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.952683 5072 factory.go:103] Registering Raw factory Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.952706 5072 manager.go:1196] Started watching for new ooms in manager Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.953622 5072 manager.go:319] Starting recovery of all containers Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.965517 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.965594 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.965617 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.965637 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.965659 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.965677 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.965695 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.965783 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.965806 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.965823 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.965849 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.965868 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.965886 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.965910 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.965927 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.965949 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.965969 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.965986 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966003 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966021 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966039 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966058 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966075 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966094 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966113 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966130 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966153 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966268 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966287 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966305 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966323 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966340 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966495 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966531 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966553 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966583 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966602 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966623 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966661 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966679 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966697 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966715 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966757 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966774 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966792 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966811 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966830 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966849 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966868 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966887 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966904 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966924 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966948 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966969 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.966990 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.967010 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.967031 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.967050 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.967070 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.967087 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.967105 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.967123 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.967142 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.967160 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.967179 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.967197 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.967215 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.967720 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.967743 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.967761 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.967778 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.967795 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.967812 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.967828 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.967849 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.967866 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.967883 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.967902 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.967919 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.967941 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.967975 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968015 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968039 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968068 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968091 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968111 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968130 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968146 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968163 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968181 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968198 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968216 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968233 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968252 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968269 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968287 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968305 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968323 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968340 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968360 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968408 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968427 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968444 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968462 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968486 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968508 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968527 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968547 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968566 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968585 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968605 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968623 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968644 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968664 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968682 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968700 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968718 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968735 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968753 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968770 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968788 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968808 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968825 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968842 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968864 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968883 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968953 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968972 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.968990 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.969007 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.969023 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.969046 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.969065 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.969084 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.973776 5072 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.973875 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.973908 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.973931 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.973963 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.973986 5072 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974014 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974034 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974056 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974082 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974102 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974127 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974148 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974166 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974194 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974213 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974238 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974257 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974304 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974332 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974355 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974461 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974494 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974523 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974562 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974588 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974631 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974661 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974691 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974766 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974802 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974833 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974876 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974911 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974955 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.974986 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.975019 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.975060 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.975154 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.975202 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.975242 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.975271 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.975314 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.975345 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.975417 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.975451 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.975487 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.975525 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.975554 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.975595 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.975623 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.975651 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.975690 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.975720 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.975762 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.975795 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.975824 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.975860 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.975888 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.975927 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.975962 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.975991 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.976033 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.976059 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.976099 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.976129 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.976157 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.976199 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.976225 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.976263 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.976291 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.976317 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.976354 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.976416 5072 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.976444 5072 reconstruct.go:97] "Volume reconstruction finished" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.976461 5072 reconciler.go:26] "Reconciler: start to sync state" Nov 24 11:09:08 crc kubenswrapper[5072]: I1124 11:09:08.987217 5072 manager.go:324] Recovery completed Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.003048 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.004601 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.004655 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.004673 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.005695 5072 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.005726 5072 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.005766 5072 state_mem.go:36] "Initialized new in-memory state store" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.011907 5072 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.015016 5072 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.015077 5072 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.015112 5072 kubelet.go:2335] "Starting kubelet main sync loop" Nov 24 11:09:09 crc kubenswrapper[5072]: E1124 11:09:09.015178 5072 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 11:09:09 crc kubenswrapper[5072]: W1124 11:09:09.017355 5072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Nov 24 11:09:09 crc kubenswrapper[5072]: E1124 11:09:09.017696 5072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.025998 5072 policy_none.go:49] "None policy: Start" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.027053 5072 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.027099 5072 state_mem.go:35] "Initializing new in-memory state store" Nov 24 11:09:09 crc kubenswrapper[5072]: E1124 11:09:09.049520 5072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.093081 5072 manager.go:334] "Starting Device Plugin manager" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.093427 5072 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.093457 5072 server.go:79] "Starting device plugin registration server" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.094049 5072 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.094076 5072 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.094275 5072 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.094496 5072 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.094532 5072 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 11:09:09 crc kubenswrapper[5072]: E1124 11:09:09.107821 5072 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.116053 5072 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 24 11:09:09 crc kubenswrapper[5072]: 
I1124 11:09:09.116166 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.117477 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.117533 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.117555 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.117748 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.117898 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.117974 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.119198 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.119220 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.119247 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.119258 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.119264 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.119282 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.119495 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.119652 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.119706 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.120524 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.120553 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.120564 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.120713 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.120894 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.120930 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.120946 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.120949 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.121011 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.121627 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.121678 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.121699 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.121897 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.122020 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.122070 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.122071 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.122182 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.122226 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.123136 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.123179 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.123195 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.123240 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.123266 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.123305 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.123399 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.123433 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.124325 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.124356 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.124386 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:09 crc kubenswrapper[5072]: E1124 11:09:09.152253 5072 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="400ms" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.179089 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.179150 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.179186 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.179218 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.179248 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.179354 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.179427 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.179460 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.179492 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.179525 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.179617 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.179673 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.179706 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.179739 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.179779 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.194920 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.196293 5072 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.196342 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.196359 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.196422 5072 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:09:09 crc kubenswrapper[5072]: E1124 11:09:09.197020 5072 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.110:6443: connect: connection refused" node="crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.281032 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.281122 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.281157 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.281219 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.281255 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.281283 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.281334 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.281333 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.281457 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.281467 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.281534 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.281581 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.281613 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.281618 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.281664 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.281670 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.281708 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") 
" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.281713 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.281747 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.281788 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.281826 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.281858 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.281936 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.281946 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.281979 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.281994 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.282055 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.282119 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.282160 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.282359 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.397321 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.399488 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.399547 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.399564 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.399634 5072 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:09:09 crc kubenswrapper[5072]: E1124 11:09:09.400121 5072 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.110:6443: connect: connection refused" node="crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.463236 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.474353 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.505301 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: W1124 11:09:09.527612 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-e929e40939f5dabc452ddcdabaeb089592358d38c21f8abb7df9a612a5ed36f4 WatchSource:0}: Error finding container e929e40939f5dabc452ddcdabaeb089592358d38c21f8abb7df9a612a5ed36f4: Status 404 returned error can't find the container with id e929e40939f5dabc452ddcdabaeb089592358d38c21f8abb7df9a612a5ed36f4 Nov 24 11:09:09 crc kubenswrapper[5072]: W1124 11:09:09.531037 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-dbfb62ea3a97aafffcab4dd6b6e31f215fbd8bb32bae3e48ca1f5e84519368e9 WatchSource:0}: Error finding container dbfb62ea3a97aafffcab4dd6b6e31f215fbd8bb32bae3e48ca1f5e84519368e9: Status 404 returned error can't find the container with id dbfb62ea3a97aafffcab4dd6b6e31f215fbd8bb32bae3e48ca1f5e84519368e9 Nov 24 11:09:09 crc kubenswrapper[5072]: W1124 11:09:09.538610 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-4198f7fdd0323efe53f2c8bea4300cf0a6578b932563f83dbe7e98e5e2fb6940 WatchSource:0}: Error finding container 4198f7fdd0323efe53f2c8bea4300cf0a6578b932563f83dbe7e98e5e2fb6940: Status 404 returned error can't find the container with id 4198f7fdd0323efe53f2c8bea4300cf0a6578b932563f83dbe7e98e5e2fb6940 Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.544671 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.552519 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:09:09 crc kubenswrapper[5072]: E1124 11:09:09.552929 5072 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="800ms" Nov 24 11:09:09 crc kubenswrapper[5072]: W1124 11:09:09.576354 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-fac29f682ea4dd95c5ea1f0bcc3a7105c9c0ccbdedace9c2ff2ab3c26db54481 WatchSource:0}: Error finding container fac29f682ea4dd95c5ea1f0bcc3a7105c9c0ccbdedace9c2ff2ab3c26db54481: Status 404 returned error can't find the container with id fac29f682ea4dd95c5ea1f0bcc3a7105c9c0ccbdedace9c2ff2ab3c26db54481 Nov 24 11:09:09 crc kubenswrapper[5072]: W1124 11:09:09.580435 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a189b78ccdf6343731414aff6590b34e4717e826d7d5567bdf9201ce502b3353 WatchSource:0}: Error finding container a189b78ccdf6343731414aff6590b34e4717e826d7d5567bdf9201ce502b3353: Status 404 returned error can't find the container with id a189b78ccdf6343731414aff6590b34e4717e826d7d5567bdf9201ce502b3353 Nov 24 11:09:09 crc kubenswrapper[5072]: W1124 11:09:09.752864 5072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Nov 24 11:09:09 crc kubenswrapper[5072]: E1124 11:09:09.752956 5072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.801114 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.803058 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.803099 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.803131 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.803159 5072 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:09:09 crc kubenswrapper[5072]: E1124 11:09:09.803582 5072 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.110:6443: connect: connection refused" node="crc" Nov 24 11:09:09 crc kubenswrapper[5072]: W1124 11:09:09.908816 5072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Nov 24 11:09:09 crc kubenswrapper[5072]: E1124 11:09:09.908990 5072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:09:09 crc kubenswrapper[5072]: I1124 11:09:09.944665 5072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Nov 24 11:09:10 crc kubenswrapper[5072]: I1124 11:09:10.020878 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4198f7fdd0323efe53f2c8bea4300cf0a6578b932563f83dbe7e98e5e2fb6940"} Nov 24 11:09:10 crc kubenswrapper[5072]: I1124 11:09:10.022228 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"dbfb62ea3a97aafffcab4dd6b6e31f215fbd8bb32bae3e48ca1f5e84519368e9"} Nov 24 11:09:10 crc kubenswrapper[5072]: I1124 11:09:10.026101 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e929e40939f5dabc452ddcdabaeb089592358d38c21f8abb7df9a612a5ed36f4"} Nov 24 11:09:10 crc kubenswrapper[5072]: I1124 11:09:10.029104 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a189b78ccdf6343731414aff6590b34e4717e826d7d5567bdf9201ce502b3353"} Nov 24 11:09:10 crc kubenswrapper[5072]: I1124 11:09:10.030751 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fac29f682ea4dd95c5ea1f0bcc3a7105c9c0ccbdedace9c2ff2ab3c26db54481"} Nov 24 11:09:10 crc kubenswrapper[5072]: W1124 11:09:10.106281 5072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Nov 24 11:09:10 crc kubenswrapper[5072]: E1124 11:09:10.106453 5072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:09:10 crc kubenswrapper[5072]: E1124 11:09:10.193686 5072 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.110:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187aecc8434d60f6 default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-24 11:09:08.935729398 +0000 UTC m=+0.647253904,LastTimestamp:2025-11-24 11:09:08.935729398 +0000 UTC m=+0.647253904,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 24 11:09:10 crc kubenswrapper[5072]: W1124 11:09:10.233853 5072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Nov 24 11:09:10 crc kubenswrapper[5072]: E1124 11:09:10.234014 5072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:09:10 crc kubenswrapper[5072]: E1124 11:09:10.354750 5072 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="1.6s" Nov 24 11:09:10 crc kubenswrapper[5072]: I1124 11:09:10.604584 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:10 crc kubenswrapper[5072]: I1124 11:09:10.606637 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:10 crc kubenswrapper[5072]: I1124 11:09:10.606674 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:10 crc kubenswrapper[5072]: I1124 11:09:10.606686 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:10 crc kubenswrapper[5072]: I1124 11:09:10.606710 5072 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:09:10 crc kubenswrapper[5072]: E1124 11:09:10.607179 5072 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.110:6443: connect: connection refused" node="crc" Nov 24 11:09:10 crc kubenswrapper[5072]: I1124 11:09:10.944725 5072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.034571 5072 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2" exitCode=0 Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.034704 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2"} Nov 24 11:09:11 
crc kubenswrapper[5072]: I1124 11:09:11.034784 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.036858 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.036891 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.036902 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.037952 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001"} Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.038022 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd"} Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.038297 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.039189 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.039211 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.039219 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.040024 5072 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="fbe0eb41ca08614efa2e3fa0af8362b0490a809470803a2e683711ac082dc7e8" exitCode=0 Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.040079 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"fbe0eb41ca08614efa2e3fa0af8362b0490a809470803a2e683711ac082dc7e8"} Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.040182 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.041151 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.041175 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.041183 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.041713 5072 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="b9772df13d553a560593560db376cb84f9ea9cb3dac735b48d2adb290c3d0e76" exitCode=0 Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.041769 
5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.041779 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"b9772df13d553a560593560db376cb84f9ea9cb3dac735b48d2adb290c3d0e76"} Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.042643 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.042674 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.042686 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.043875 5072 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="91aa9d18d2efa1c3559a3a17858453a13c76b7567ffb215046c57556b661890c" exitCode=0 Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.043966 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.043901 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"91aa9d18d2efa1c3559a3a17858453a13c76b7567ffb215046c57556b661890c"} Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.047940 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.047986 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.048007 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:11 crc kubenswrapper[5072]: I1124 11:09:11.944474 5072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Nov 24 11:09:11 crc kubenswrapper[5072]: E1124 11:09:11.956401 5072 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="3.2s" Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.050829 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9"} Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.051203 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb"} Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.051227 5072 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3"} Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.055664 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.055650 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b"} Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.055727 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c"} Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.056830 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.056864 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.056872 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.058143 5072 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d299986df8243aa52e1ca08fff9cac0db589f25b646f32366e304cf4fc915214" exitCode=0 Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.058213 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d299986df8243aa52e1ca08fff9cac0db589f25b646f32366e304cf4fc915214"} Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.058348 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.063501 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.063554 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.063574 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.064758 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"473adc67bdfd905b16f570cb175b1e550ed0929162d0d6c9903c855e069fc30c"} Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.064965 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.066173 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.066207 5072 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.066224 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.070293 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"24ca0cd9727c9f25252266ba758cfa75b6d48b1f683f97b36bc3a40d6e4d9346"} Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.070518 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"802b58c2bb92a1887147eee76414a66c948e077ad8a3835bccd344ae67562b89"} Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.070367 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.070601 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1845d620994797b0fad3550ee243fdb5719b076cd21e2cd9fbdbfd84d5afd805"} Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.072007 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.072051 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.072068 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.208259 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.211595 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.211653 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.211664 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.211687 5072 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:09:12 crc kubenswrapper[5072]: E1124 11:09:12.212209 5072 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.110:6443: connect: connection refused" node="crc" Nov 24 11:09:12 crc kubenswrapper[5072]: W1124 11:09:12.388546 5072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Nov 24 11:09:12 crc kubenswrapper[5072]: E1124 11:09:12.388654 5072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list 
*v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.787515 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:09:12 crc kubenswrapper[5072]: W1124 11:09:12.818615 5072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Nov 24 11:09:12 crc kubenswrapper[5072]: E1124 11:09:12.818705 5072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:09:12 crc kubenswrapper[5072]: I1124 11:09:12.943599 5072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Nov 24 11:09:12 crc kubenswrapper[5072]: W1124 11:09:12.954445 5072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Nov 24 11:09:12 crc kubenswrapper[5072]: E1124 11:09:12.954539 5072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:09:13 crc kubenswrapper[5072]: I1124 11:09:13.076590 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39"} Nov 24 11:09:13 crc kubenswrapper[5072]: I1124 11:09:13.076636 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af"} Nov 24 11:09:13 crc kubenswrapper[5072]: I1124 11:09:13.076755 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:13 crc kubenswrapper[5072]: I1124 11:09:13.077993 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:13 crc kubenswrapper[5072]: I1124 11:09:13.078040 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:13 crc kubenswrapper[5072]: I1124 11:09:13.078055 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:13 crc kubenswrapper[5072]: I1124 
11:09:13.080139 5072 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c4342dc1e79fedf172c723736a130039e76d481d9c04106a22ad25ab8e3c8cb9" exitCode=0 Nov 24 11:09:13 crc kubenswrapper[5072]: I1124 11:09:13.080242 5072 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 11:09:13 crc kubenswrapper[5072]: I1124 11:09:13.080278 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:13 crc kubenswrapper[5072]: I1124 11:09:13.080666 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c4342dc1e79fedf172c723736a130039e76d481d9c04106a22ad25ab8e3c8cb9"} Nov 24 11:09:13 crc kubenswrapper[5072]: I1124 11:09:13.080768 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:13 crc kubenswrapper[5072]: I1124 11:09:13.080815 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:13 crc kubenswrapper[5072]: I1124 11:09:13.080846 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:13 crc kubenswrapper[5072]: I1124 11:09:13.082195 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:13 crc kubenswrapper[5072]: I1124 11:09:13.082242 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:13 crc kubenswrapper[5072]: I1124 11:09:13.082264 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:13 crc kubenswrapper[5072]: I1124 11:09:13.082909 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:13 crc kubenswrapper[5072]: I1124 11:09:13.082937 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:13 crc kubenswrapper[5072]: I1124 11:09:13.082949 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:13 crc kubenswrapper[5072]: I1124 11:09:13.082983 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:13 crc kubenswrapper[5072]: I1124 11:09:13.083017 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:13 crc kubenswrapper[5072]: I1124 11:09:13.083039 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:13 crc kubenswrapper[5072]: I1124 11:09:13.083547 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:13 crc kubenswrapper[5072]: I1124 11:09:13.083627 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:13 crc kubenswrapper[5072]: I1124 11:09:13.083648 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:14 crc kubenswrapper[5072]: I1124 11:09:14.057324 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 
11:09:14 crc kubenswrapper[5072]: I1124 11:09:14.088625 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4fc0def38d015fe99a0b28cb7d120f2057643bcb99bf6f3040e5edb22a436000"} Nov 24 11:09:14 crc kubenswrapper[5072]: I1124 11:09:14.088690 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"41099739f7a68ef18ea64b023b551a42670db1d9f80706439936aaf6942a38d1"} Nov 24 11:09:14 crc kubenswrapper[5072]: I1124 11:09:14.088704 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"836caae6820dd3abcef209e4d66a7d64ba81ffe10c43494666a989cee7ee24ee"} Nov 24 11:09:14 crc kubenswrapper[5072]: I1124 11:09:14.088744 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:14 crc kubenswrapper[5072]: I1124 11:09:14.088784 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:14 crc kubenswrapper[5072]: I1124 11:09:14.088822 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:09:14 crc kubenswrapper[5072]: I1124 11:09:14.091337 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:14 crc kubenswrapper[5072]: I1124 11:09:14.091417 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:14 crc kubenswrapper[5072]: I1124 11:09:14.091459 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:14 crc kubenswrapper[5072]: I1124 11:09:14.094830 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:14 crc kubenswrapper[5072]: I1124 11:09:14.094879 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:14 crc kubenswrapper[5072]: I1124 11:09:14.094899 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:14 crc kubenswrapper[5072]: I1124 11:09:14.909760 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:09:15 crc kubenswrapper[5072]: I1124 11:09:15.097624 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e99413babe707e048ced5765f9107219351b2df100fa7f430edb844cc73eecd0"} Nov 24 11:09:15 crc kubenswrapper[5072]: I1124 11:09:15.098477 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4f07b4fd90df5b04817aa5d8428f0790e1f543f9480016c9f260e26edd478db5"} Nov 24 11:09:15 crc kubenswrapper[5072]: I1124 11:09:15.097726 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:15 crc kubenswrapper[5072]: I1124 11:09:15.097711 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:15 crc 
kubenswrapper[5072]: I1124 11:09:15.099844 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:15 crc kubenswrapper[5072]: I1124 11:09:15.100031 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:15 crc kubenswrapper[5072]: I1124 11:09:15.100168 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:15 crc kubenswrapper[5072]: I1124 11:09:15.100782 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:15 crc kubenswrapper[5072]: I1124 11:09:15.100846 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:15 crc kubenswrapper[5072]: I1124 11:09:15.100864 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:15 crc kubenswrapper[5072]: I1124 11:09:15.412524 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:15 crc kubenswrapper[5072]: I1124 11:09:15.413948 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:15 crc kubenswrapper[5072]: I1124 11:09:15.414056 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:15 crc kubenswrapper[5072]: I1124 11:09:15.414118 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:15 crc kubenswrapper[5072]: I1124 11:09:15.414195 5072 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:09:16 crc kubenswrapper[5072]: I1124 11:09:16.100244 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:16 crc kubenswrapper[5072]: I1124 11:09:16.100301 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:16 crc kubenswrapper[5072]: I1124 11:09:16.101783 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:16 crc kubenswrapper[5072]: I1124 11:09:16.101836 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:16 crc kubenswrapper[5072]: I1124 11:09:16.101859 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:16 crc kubenswrapper[5072]: I1124 11:09:16.102209 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:16 crc kubenswrapper[5072]: I1124 11:09:16.102411 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:16 crc kubenswrapper[5072]: I1124 11:09:16.102556 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:16 crc kubenswrapper[5072]: I1124 11:09:16.999237 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 24 11:09:17 crc kubenswrapper[5072]: I1124 11:09:17.102511 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:17 crc 
kubenswrapper[5072]: I1124 11:09:17.104796 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:17 crc kubenswrapper[5072]: I1124 11:09:17.104846 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:17 crc kubenswrapper[5072]: I1124 11:09:17.104862 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:17 crc kubenswrapper[5072]: I1124 11:09:17.901264 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:09:17 crc kubenswrapper[5072]: I1124 11:09:17.901566 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:17 crc kubenswrapper[5072]: I1124 11:09:17.903496 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:17 crc kubenswrapper[5072]: I1124 11:09:17.903556 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:17 crc kubenswrapper[5072]: I1124 11:09:17.903577 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:17 crc kubenswrapper[5072]: I1124 11:09:17.908827 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:09:18 crc kubenswrapper[5072]: I1124 11:09:18.106071 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:18 crc kubenswrapper[5072]: I1124 11:09:18.107814 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:18 crc kubenswrapper[5072]: I1124 11:09:18.107884 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:18 crc kubenswrapper[5072]: I1124 11:09:18.107902 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:18 crc kubenswrapper[5072]: I1124 11:09:18.152773 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:09:18 crc kubenswrapper[5072]: I1124 11:09:18.900668 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:09:18 crc kubenswrapper[5072]: I1124 11:09:18.900891 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:18 crc kubenswrapper[5072]: I1124 11:09:18.902502 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:18 crc kubenswrapper[5072]: I1124 11:09:18.902600 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:18 crc kubenswrapper[5072]: I1124 11:09:18.902628 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:18 crc kubenswrapper[5072]: I1124 11:09:18.936863 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 24 11:09:18 crc kubenswrapper[5072]: 
I1124 11:09:18.937029 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:18 crc kubenswrapper[5072]: I1124 11:09:18.938072 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:18 crc kubenswrapper[5072]: I1124 11:09:18.938164 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:18 crc kubenswrapper[5072]: I1124 11:09:18.938193 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:19 crc kubenswrapper[5072]: E1124 11:09:19.108838 5072 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 11:09:19 crc kubenswrapper[5072]: I1124 11:09:19.119350 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:19 crc kubenswrapper[5072]: I1124 11:09:19.123950 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:19 crc kubenswrapper[5072]: I1124 11:09:19.124014 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:19 crc kubenswrapper[5072]: I1124 11:09:19.124032 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:21 crc kubenswrapper[5072]: I1124 11:09:21.495472 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:09:21 crc kubenswrapper[5072]: I1124 11:09:21.496159 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:21 crc kubenswrapper[5072]: I1124 11:09:21.498781 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:21 crc kubenswrapper[5072]: I1124 11:09:21.498829 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:21 crc kubenswrapper[5072]: I1124 11:09:21.498849 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:21 crc kubenswrapper[5072]: I1124 11:09:21.505931 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:09:22 crc kubenswrapper[5072]: I1124 11:09:22.126971 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:22 crc kubenswrapper[5072]: I1124 11:09:22.128440 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:22 crc kubenswrapper[5072]: I1124 11:09:22.128507 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:22 crc kubenswrapper[5072]: I1124 11:09:22.128526 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:23 crc kubenswrapper[5072]: W1124 11:09:23.306911 5072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": 
net/http: TLS handshake timeout Nov 24 11:09:23 crc kubenswrapper[5072]: I1124 11:09:23.307017 5072 trace.go:236] Trace[1796200961]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 11:09:13.305) (total time: 10001ms): Nov 24 11:09:23 crc kubenswrapper[5072]: Trace[1796200961]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:09:23.306) Nov 24 11:09:23 crc kubenswrapper[5072]: Trace[1796200961]: [10.001576217s] [10.001576217s] END Nov 24 11:09:23 crc kubenswrapper[5072]: E1124 11:09:23.307046 5072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 24 11:09:23 crc kubenswrapper[5072]: I1124 11:09:23.572084 5072 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52524->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 24 11:09:23 crc kubenswrapper[5072]: I1124 11:09:23.572146 5072 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52524->192.168.126.11:17697: read: connection reset by peer" Nov 24 11:09:23 crc kubenswrapper[5072]: I1124 11:09:23.944668 5072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 24 11:09:24 crc kubenswrapper[5072]: I1124 11:09:24.131961 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 11:09:24 crc kubenswrapper[5072]: I1124 11:09:24.133654 5072 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39" exitCode=255 Nov 24 11:09:24 crc kubenswrapper[5072]: I1124 11:09:24.133689 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39"} Nov 24 11:09:24 crc kubenswrapper[5072]: I1124 11:09:24.133805 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:24 crc kubenswrapper[5072]: I1124 11:09:24.134598 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:24 crc kubenswrapper[5072]: I1124 11:09:24.134643 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:24 crc kubenswrapper[5072]: I1124 11:09:24.134656 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 
11:09:24 crc kubenswrapper[5072]: I1124 11:09:24.135189 5072 scope.go:117] "RemoveContainer" containerID="e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39" Nov 24 11:09:24 crc kubenswrapper[5072]: I1124 11:09:24.197604 5072 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 24 11:09:24 crc kubenswrapper[5072]: I1124 11:09:24.197736 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 24 11:09:24 crc kubenswrapper[5072]: I1124 11:09:24.218451 5072 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 24 11:09:24 crc kubenswrapper[5072]: I1124 11:09:24.218523 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 24 11:09:24 crc kubenswrapper[5072]: I1124 11:09:24.495912 5072 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 11:09:24 crc kubenswrapper[5072]: I1124 11:09:24.495993 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 24 11:09:24 crc kubenswrapper[5072]: I1124 11:09:24.917432 5072 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]log ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]etcd ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/openshift.io-api-request-count-filter ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/openshift.io-startkubeinformers ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Nov 24 11:09:24 crc kubenswrapper[5072]: 
[+]poststarthook/start-apiserver-admission-initializer ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/generic-apiserver-start-informers ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/priority-and-fairness-config-consumer ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/priority-and-fairness-filter ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/start-apiextensions-informers ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/start-apiextensions-controllers ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/crd-informer-synced ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/start-system-namespaces-controller ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/start-cluster-authentication-info-controller ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/start-legacy-token-tracking-controller ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/start-service-ip-repair-controllers ok Nov 24 11:09:24 crc kubenswrapper[5072]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Nov 24 11:09:24 crc kubenswrapper[5072]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/priority-and-fairness-config-producer ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/bootstrap-controller ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/start-kube-aggregator-informers ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/apiservice-status-local-available-controller ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/apiservice-status-remote-available-controller ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/apiservice-registration-controller ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/apiservice-wait-for-first-sync ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/apiservice-discovery-controller ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/kube-apiserver-autoregistration ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]autoregister-completion ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/apiservice-openapi-controller ok Nov 24 11:09:24 crc kubenswrapper[5072]: [+]poststarthook/apiservice-openapiv3-controller ok Nov 24 11:09:24 crc kubenswrapper[5072]: livez check failed Nov 24 11:09:24 crc kubenswrapper[5072]: I1124 11:09:24.917527 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:09:25 crc kubenswrapper[5072]: I1124 11:09:25.140429 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 11:09:25 crc kubenswrapper[5072]: I1124 11:09:25.143352 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7"} Nov 24 11:09:25 crc kubenswrapper[5072]: I1124 11:09:25.143629 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:25 crc kubenswrapper[5072]: I1124 11:09:25.144863 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:25 crc kubenswrapper[5072]: I1124 11:09:25.144911 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:25 crc kubenswrapper[5072]: I1124 11:09:25.144928 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:28 crc kubenswrapper[5072]: I1124 11:09:28.970993 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 24 11:09:28 crc kubenswrapper[5072]: I1124 11:09:28.971283 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:28 crc kubenswrapper[5072]: I1124 11:09:28.972957 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:28 crc kubenswrapper[5072]: I1124 11:09:28.973043 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:28 crc kubenswrapper[5072]: I1124 11:09:28.973061 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:28 crc kubenswrapper[5072]: I1124 11:09:28.991820 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 24 11:09:29 crc kubenswrapper[5072]: E1124 11:09:29.109551 5072 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.153982 5072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.156481 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.156543 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.156571 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:29 crc kubenswrapper[5072]: E1124 11:09:29.197856 5072 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.198027 5072 trace.go:236] Trace[968149242]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 11:09:17.877) (total time: 11320ms): Nov 24 11:09:29 crc kubenswrapper[5072]: Trace[968149242]: ---"Objects listed" error: 11320ms (11:09:29.197) Nov 24 11:09:29 crc kubenswrapper[5072]: Trace[968149242]: [11.320058967s] [11.320058967s] END Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.198208 5072 reflector.go:368] Caches populated for 
*v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.203483 5072 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.203532 5072 trace.go:236] Trace[1189104985]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 11:09:16.562) (total time: 12640ms): Nov 24 11:09:29 crc kubenswrapper[5072]: Trace[1189104985]: ---"Objects listed" error: 12640ms (11:09:29.203) Nov 24 11:09:29 crc kubenswrapper[5072]: Trace[1189104985]: [12.640874736s] [12.640874736s] END Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.203565 5072 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.203634 5072 trace.go:236] Trace[1603471115]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 11:09:18.831) (total time: 10372ms): Nov 24 11:09:29 crc kubenswrapper[5072]: Trace[1603471115]: ---"Objects listed" error: 10371ms (11:09:29.203) Nov 24 11:09:29 crc kubenswrapper[5072]: Trace[1603471115]: [10.372029831s] [10.372029831s] END Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.203650 5072 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.209821 5072 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.210068 5072 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.211518 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.211616 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.211689 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.211763 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.211839 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:29Z","lastTransitionTime":"2025-11-24T11:09:29Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Nov 24 11:09:29 crc kubenswrapper[5072]: E1124 11:09:29.224186 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.228005 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.228132 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 
11:09:29.228192 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.228270 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.228345 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:29Z","lastTransitionTime":"2025-11-24T11:09:29Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Nov 24 11:09:29 crc kubenswrapper[5072]: E1124 11:09:29.262761 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.267242 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.267278 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 
11:09:29.267291 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.267312 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.267324 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:29Z","lastTransitionTime":"2025-11-24T11:09:29Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Nov 24 11:09:29 crc kubenswrapper[5072]: E1124 11:09:29.279355 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.282721 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.282746 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 
11:09:29.282754 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.282769 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.282779 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:29Z","lastTransitionTime":"2025-11-24T11:09:29Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Nov 24 11:09:29 crc kubenswrapper[5072]: E1124 11:09:29.290961 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.293401 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.293425 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 
11:09:29.293433 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.293445 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.293453 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:29Z","lastTransitionTime":"2025-11-24T11:09:29Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Nov 24 11:09:29 crc kubenswrapper[5072]: E1124 11:09:29.300835 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:09:29 crc kubenswrapper[5072]: E1124 11:09:29.300938 5072 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.302121 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 
11:09:29.302149 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.302157 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.302172 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.302180 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:29Z","lastTransitionTime":"2025-11-24T11:09:29Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.387282 5072 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.404531 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.404856 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.404871 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.404905 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.404926 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:29Z","lastTransitionTime":"2025-11-24T11:09:29Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.507737 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.507797 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.507818 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.507849 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.507868 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:29Z","lastTransitionTime":"2025-11-24T11:09:29Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.610223 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.610301 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.610323 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.610356 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.610422 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:29Z","lastTransitionTime":"2025-11-24T11:09:29Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.713145 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.713178 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.713188 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.713207 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.713218 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:29Z","lastTransitionTime":"2025-11-24T11:09:29Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.815072 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.815171 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.815190 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.815220 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.815238 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:29Z","lastTransitionTime":"2025-11-24T11:09:29Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.918256 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.918342 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.918367 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.918457 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.918485 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:29Z","lastTransitionTime":"2025-11-24T11:09:29Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.921176 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.921520 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.926513 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.937803 5072 apiserver.go:52] "Watching apiserver" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.950004 5072 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.950314 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.950702 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.950807 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:09:29 crc kubenswrapper[5072]: E1124 11:09:29.950941 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.951159 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:09:29 crc kubenswrapper[5072]: E1124 11:09:29.951498 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.951613 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.951891 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:09:29 crc kubenswrapper[5072]: E1124 11:09:29.951982 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.952429 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.953602 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.955060 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.956237 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.957741 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.958666 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.958749 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.958679 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.960947 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.962620 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 24 11:09:29 crc kubenswrapper[5072]: I1124 11:09:29.986756 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.006388 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.022220 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.022283 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.022302 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.022327 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.022411 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:30Z","lastTransitionTime":"2025-11-24T11:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.031247 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.042011 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.050413 5072 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.055324 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.068583 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.082779 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.100671 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.107687 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.107742 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.107762 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod 
\"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.107798 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.107815 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.107830 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.107847 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.107864 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.107881 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.108116 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.108262 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.108293 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.108322 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.108344 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.108527 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.108546 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.108545 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.108574 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.108639 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.108666 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.108703 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.108736 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.108740 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.108763 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.108792 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.108842 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.108847 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.108896 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.108921 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.108941 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.108957 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.108879 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.108974 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109082 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109097 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109101 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109119 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109135 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109155 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109177 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109195 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109211 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109226 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109244 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109260 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109277 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109294 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109310 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109328 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109344 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109343 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109347 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109360 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109415 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). 
InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109418 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109519 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109530 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109540 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109599 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109628 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109642 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109657 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109682 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109706 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109759 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109780 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109802 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109823 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109846 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109869 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109896 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: 
\"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109915 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109934 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109937 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.109983 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110001 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110016 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110015 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110021 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110035 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110034 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110053 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110071 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110088 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110104 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110119 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110134 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110150 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110155 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110165 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110181 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110197 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110214 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110229 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110247 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110262 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110268 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110279 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110322 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110342 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110347 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110416 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110445 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110468 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110490 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110511 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110513 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110533 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110555 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110574 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110580 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110624 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110650 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110676 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110700 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110718 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). 
InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110726 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110771 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110773 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.110799 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.111047 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.111075 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.111096 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.111122 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.111161 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.111318 5072 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.111320 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.111970 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.111990 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.111999 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112022 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112043 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112066 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112087 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112109 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod 
\"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112131 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112152 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112153 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112175 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112197 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112220 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112239 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112263 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112285 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112283 5072 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112307 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112329 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112349 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112407 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112431 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112459 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112483 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112504 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112525 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 
11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112539 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112550 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112605 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112636 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112664 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112691 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112719 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.112747 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113000 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113027 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113051 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113067 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113076 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113103 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113131 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113206 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113224 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113256 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113282 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113306 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113330 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113357 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113405 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113434 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113462 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113491 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113516 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113541 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113564 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113587 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113611 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113637 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113664 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113687 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113711 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113741 5072 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113768 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113794 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113819 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113843 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113867 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113892 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113916 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.113938 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114000 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 11:09:30 crc 
kubenswrapper[5072]: I1124 11:09:30.114027 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114051 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114074 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114097 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114123 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114147 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114172 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114197 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114228 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114253 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 11:09:30 crc 
kubenswrapper[5072]: I1124 11:09:30.114276 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114298 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114321 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114343 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114366 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114408 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114430 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114451 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114474 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114499 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 
24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114520 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114542 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114564 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114588 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114610 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114636 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114665 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114690 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114714 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114740 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114764 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114789 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114821 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114840 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114872 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114898 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114922 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114947 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114974 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.114999 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115027 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115047 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115051 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115111 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115145 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115170 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115191 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: 
\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115211 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115324 5072 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115336 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115349 5072 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115361 5072 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115386 5072 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115398 5072 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115408 5072 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115418 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115430 5072 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115440 5072 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115451 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc 
kubenswrapper[5072]: I1124 11:09:30.115462 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115473 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115484 5072 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115495 5072 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115504 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115514 5072 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115525 5072 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115535 5072 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115545 5072 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115554 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115563 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115573 5072 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115582 5072 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc 
kubenswrapper[5072]: I1124 11:09:30.115591 5072 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115601 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115610 5072 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115620 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115631 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115641 5072 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115650 5072 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115659 5072 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115669 5072 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115679 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115687 5072 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115696 5072 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115709 5072 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115331 5072 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115581 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115794 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.115905 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.116076 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.116321 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.116582 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.116597 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.116924 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.117228 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.117601 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.117886 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.117897 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.118102 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.118261 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.118430 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.118818 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.118819 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: E1124 11:09:30.118930 5072 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:09:30 crc kubenswrapper[5072]: E1124 11:09:30.118989 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:09:30.61897092 +0000 UTC m=+22.330495396 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.119185 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.119407 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.119603 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.119813 5072 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.119815 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.119982 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.120519 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.120825 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.121278 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.121502 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.123103 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.123472 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.124534 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.124953 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.125223 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.125519 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.125971 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.126521 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.126870 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.127321 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.130453 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.130486 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.131844 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.132019 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.132271 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.132908 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.133046 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.133319 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.133526 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.133685 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.133688 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: E1124 11:09:30.134904 5072 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:09:30 crc kubenswrapper[5072]: E1124 11:09:30.135312 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:09:30.6352928 +0000 UTC m=+22.346817286 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.149736 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.149879 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.149977 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.150068 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.150232 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.150230 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.150274 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: E1124 11:09:30.150333 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:09:30.650313709 +0000 UTC m=+22.361838185 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.150575 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.151069 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.155828 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.156128 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.156417 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.156658 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.157704 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.157997 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.158077 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.158116 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.158433 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.158441 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.158513 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.158579 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.158599 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.158853 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.158868 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.158876 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.159000 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.159172 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.159213 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.159286 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.159460 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.159646 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.159657 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.160499 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.160765 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.160891 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.160949 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.161056 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.161075 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.161178 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.161277 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.161366 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.161457 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.161542 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.161613 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:30Z","lastTransitionTime":"2025-11-24T11:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.161504 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.161724 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.162048 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.162091 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.162276 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.163578 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.161194 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.164024 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.164510 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.164813 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.165046 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.165090 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.168285 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.168626 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.168699 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.168982 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.169231 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.169297 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.169581 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.169646 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.169900 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.169917 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.170119 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.170195 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.170362 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). 
InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.170479 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.170494 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.170710 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.170738 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.170796 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.170936 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.171190 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.171452 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.172321 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.172478 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.172647 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: E1124 11:09:30.172683 5072 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.173064 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.173835 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.173955 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.174536 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.175606 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.175810 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.176107 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.176214 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.176567 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.177453 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.177776 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.177789 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.177834 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.181728 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.182821 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.183473 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.183822 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.184081 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.184196 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.184436 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.184693 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.184796 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.184838 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.185097 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.185305 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.185473 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.185486 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.185821 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.185862 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.185967 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.186133 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.186175 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.186543 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.186624 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: E1124 11:09:30.186842 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:09:30 crc kubenswrapper[5072]: E1124 11:09:30.186870 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:09:30 crc kubenswrapper[5072]: E1124 11:09:30.186882 5072 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:09:30 crc kubenswrapper[5072]: E1124 11:09:30.186938 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:09:30.686921445 +0000 UTC m=+22.398445921 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.187105 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.187693 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.189121 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.189591 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: E1124 11:09:30.189725 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:09:30 crc kubenswrapper[5072]: E1124 11:09:30.189737 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:09:30 crc kubenswrapper[5072]: E1124 11:09:30.189746 5072 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:09:30 crc kubenswrapper[5072]: E1124 11:09:30.189777 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:09:30.689767483 +0000 UTC m=+22.401291959 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.190250 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.193225 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.197776 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.205329 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216478 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216541 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216588 5072 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216602 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216611 5072 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216623 5072 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216634 5072 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216643 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216651 5072 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216659 5072 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216667 5072 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216676 5072 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216685 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216695 5072 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216706 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216717 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216726 5072 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216736 5072 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216744 5072 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216770 5072 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216778 5072 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216787 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: 
\"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216798 5072 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216809 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216820 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216828 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216836 5072 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216845 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216854 5072 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216862 5072 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216870 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216893 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216901 5072 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216910 5072 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216918 5072 reconciler_common.go:293] "Volume 
detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216929 5072 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216938 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216952 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216960 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216968 5072 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216976 5072 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216984 5072 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.216992 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217001 5072 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217009 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217017 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217026 5072 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217033 5072 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217041 5072 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217049 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217057 5072 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217066 5072 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217079 5072 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217090 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217098 5072 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217106 5072 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217114 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217121 5072 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217129 5072 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217141 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217151 5072 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217160 5072 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217168 5072 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217178 5072 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217186 5072 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217195 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217203 5072 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217210 5072 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217218 5072 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217230 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217242 5072 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217253 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217263 5072 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217271 5072 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217279 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217289 5072 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217297 5072 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217306 5072 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217313 5072 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217321 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217329 5072 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217337 5072 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217345 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217353 5072 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217361 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217386 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217395 5072 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217403 5072 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217410 5072 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217418 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217428 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217438 5072 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217446 5072 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217454 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217462 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217471 5072 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217479 5072 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217486 5072 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217494 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217502 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: 
\"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217510 5072 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217520 5072 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217527 5072 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217536 5072 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217547 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217556 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217564 5072 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217571 5072 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217580 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217588 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217598 5072 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217607 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217616 5072 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217623 5072 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217633 5072 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217641 5072 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217650 5072 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217658 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217666 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217674 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217682 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217689 5072 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217697 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217705 5072 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217712 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217721 5072 reconciler_common.go:293] "Volume detached for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217728 5072 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217742 5072 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217752 5072 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217759 5072 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217767 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217776 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217784 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217793 5072 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217801 5072 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217809 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217819 5072 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217828 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217840 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217850 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217858 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217866 5072 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217875 5072 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217885 5072 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217895 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217906 5072 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217916 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217928 5072 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217936 5072 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217944 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217952 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217960 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217968 5072 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217977 5072 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217985 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.217993 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.218001 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.218010 5072 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.218018 5072 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.218062 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.218190 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.264409 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.264467 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.264478 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.264492 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 
11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.264504 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:30Z","lastTransitionTime":"2025-11-24T11:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.269698 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.278694 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.285882 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.369666 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.369914 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.369923 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.369937 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.369946 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:30Z","lastTransitionTime":"2025-11-24T11:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.473627 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.473663 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.473672 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.473689 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.473731 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:30Z","lastTransitionTime":"2025-11-24T11:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.540911 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-bkjf7"] Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.541460 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-bkjf7" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.544655 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.548430 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.548814 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.560468 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.570779 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.575694 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.575747 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.575778 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.575796 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.575809 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:30Z","lastTransitionTime":"2025-11-24T11:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.582356 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.593322 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.611816 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.624379 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.624413 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/175fd540-009b-4cb4-9c3e-e2ebc7e787f3-hosts-file\") pod \"node-resolver-bkjf7\" (UID: \"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\") " pod="openshift-dns/node-resolver-bkjf7" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.624452 5072 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcts8\" (UniqueName: \"kubernetes.io/projected/175fd540-009b-4cb4-9c3e-e2ebc7e787f3-kube-api-access-tcts8\") pod \"node-resolver-bkjf7\" (UID: \"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\") " pod="openshift-dns/node-resolver-bkjf7" Nov 24 11:09:30 crc kubenswrapper[5072]: E1124 11:09:30.624542 5072 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:09:30 crc kubenswrapper[5072]: E1124 11:09:30.624653 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:09:31.62463204 +0000 UTC m=+23.336156556 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.635358 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.651901 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.663709 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 
24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.678417 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.678463 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.678475 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.678491 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.678503 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:30Z","lastTransitionTime":"2025-11-24T11:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.724899 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.725034 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/175fd540-009b-4cb4-9c3e-e2ebc7e787f3-hosts-file\") pod \"node-resolver-bkjf7\" (UID: \"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\") " pod="openshift-dns/node-resolver-bkjf7" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.725077 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.725115 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:09:30 crc kubenswrapper[5072]: E1124 11:09:30.725148 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:09:31.725114342 +0000 UTC m=+23.436638818 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.725210 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcts8\" (UniqueName: \"kubernetes.io/projected/175fd540-009b-4cb4-9c3e-e2ebc7e787f3-kube-api-access-tcts8\") pod \"node-resolver-bkjf7\" (UID: \"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\") " pod="openshift-dns/node-resolver-bkjf7" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.725225 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/175fd540-009b-4cb4-9c3e-e2ebc7e787f3-hosts-file\") pod \"node-resolver-bkjf7\" (UID: \"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\") " pod="openshift-dns/node-resolver-bkjf7" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.725254 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:09:30 crc kubenswrapper[5072]: E1124 11:09:30.725322 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:09:30 crc kubenswrapper[5072]: E1124 11:09:30.725351 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:09:30 crc kubenswrapper[5072]: E1124 11:09:30.725367 5072 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:09:30 crc kubenswrapper[5072]: E1124 11:09:30.725402 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:09:30 crc kubenswrapper[5072]: E1124 11:09:30.725424 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:09:30 crc kubenswrapper[5072]: E1124 11:09:30.725437 5072 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:09:30 crc kubenswrapper[5072]: E1124 11:09:30.725470 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:09:31.72544473 +0000 UTC m=+23.436969226 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:09:30 crc kubenswrapper[5072]: E1124 11:09:30.725499 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:09:31.725487511 +0000 UTC m=+23.437012007 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:09:30 crc kubenswrapper[5072]: E1124 11:09:30.725595 5072 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:09:30 crc kubenswrapper[5072]: E1124 11:09:30.725658 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:09:31.725645115 +0000 UTC m=+23.437169781 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.743983 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcts8\" (UniqueName: \"kubernetes.io/projected/175fd540-009b-4cb4-9c3e-e2ebc7e787f3-kube-api-access-tcts8\") pod \"node-resolver-bkjf7\" (UID: \"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\") " pod="openshift-dns/node-resolver-bkjf7" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.780560 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.780846 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.780998 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.781122 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.781253 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:30Z","lastTransitionTime":"2025-11-24T11:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.856393 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-bkjf7" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.882839 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.883048 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.883133 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.883216 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.883308 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:30Z","lastTransitionTime":"2025-11-24T11:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:30 crc kubenswrapper[5072]: W1124 11:09:30.889120 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod175fd540_009b_4cb4_9c3e_e2ebc7e787f3.slice/crio-705b59b2ebac9fa21437d23f77b512ff2de0734086276ecaffbc8dcc82afb253 WatchSource:0}: Error finding container 705b59b2ebac9fa21437d23f77b512ff2de0734086276ecaffbc8dcc82afb253: Status 404 returned error can't find the container with id 705b59b2ebac9fa21437d23f77b512ff2de0734086276ecaffbc8dcc82afb253 Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.987873 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.987921 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.987935 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.987951 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:30 crc kubenswrapper[5072]: I1124 11:09:30.987963 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:30Z","lastTransitionTime":"2025-11-24T11:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.020136 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.021010 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.022355 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.023162 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.024482 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.025116 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.025907 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: 
I1124 11:09:31.027139 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.027940 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.029147 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.029770 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.031075 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.031718 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.032500 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.033646 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.034279 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.035619 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.036102 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.037029 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.038436 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.039019 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.040284 5072 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.040999 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.042326 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.043002 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.043841 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.045288 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.046253 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.047053 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.048216 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.048961 5072 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.049099 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.051768 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.052445 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.052983 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.055274 5072 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.057329 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.058547 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.060896 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.061559 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.062430 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.063010 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.064012 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.065016 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.065477 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.066412 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.066890 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.067997 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.068653 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.069080 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.069897 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.070461 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.071363 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.071853 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.090247 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.090282 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.090290 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.090304 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.090315 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:31Z","lastTransitionTime":"2025-11-24T11:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.169557 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bkjf7" event={"ID":"175fd540-009b-4cb4-9c3e-e2ebc7e787f3","Type":"ContainerStarted","Data":"d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e"} Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.169618 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bkjf7" event={"ID":"175fd540-009b-4cb4-9c3e-e2ebc7e787f3","Type":"ContainerStarted","Data":"705b59b2ebac9fa21437d23f77b512ff2de0734086276ecaffbc8dcc82afb253"} Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.170411 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"88dcacfdb486ccc18915c40ff0b364861848fe81e9fb0d68b738bcd5dde2c9e4"} Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.171831 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31"} Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.171874 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8"} Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.171890 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6d708e4e44d6c39ed3ea0f487d7f5d2b7a3dd0ab7f7927f328e76dee5c4242d8"} Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.172970 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060"} Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.172999 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"500bdbea372532ce4c985b990518cc95dfa7a9be5225532f59e1a3c616230b24"} Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.183481 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
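Every "Node became not ready" condition in this stretch carries the same root cause: NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/. A quick way to confirm the directory is empty on the node, as a hedged sketch (the path is taken directly from the kubelet message above):

package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	const dir = "/etc/kubernetes/cni/net.d" // path from the kubelet message
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatalf("cannot read %s: %v", dir, err)
	}
	if len(entries) == 0 {
		fmt.Println("no CNI config files; NetworkReady stays false")
	}
	for _, e := range entries {
		fmt.Println(e.Name())
	}
}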
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.193111 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.193156 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.193180 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.193200 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.193213 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:31Z","lastTransitionTime":"2025-11-24T11:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.194297 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.208100 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.220858 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.246754 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.270033 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.296895 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.296932 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.296943 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.296955 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.296965 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:31Z","lastTransitionTime":"2025-11-24T11:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.298278 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.310542 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.331449 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.341618 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.352602 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.369866 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.384074 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.397493 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.398767 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.398801 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.398816 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.398835 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.398846 5072 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:31Z","lastTransitionTime":"2025-11-24T11:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.409889 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.424019 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.500508 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.504549 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.513924 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.514027 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.514117 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.514179 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.514240 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:31Z","lastTransitionTime":"2025-11-24T11:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.516888 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.526910 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.536015 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.546884 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.563842 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-qjsxf"] Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.567975 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.571050 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.571631 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.571755 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.571837 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.572215 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.574822 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.574969 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-t8b9x"] Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.575439 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-jfxnb"] Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.575666 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.575733 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.577590 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.577757 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.578156 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.578436 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.578477 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.578727 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.579111 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.586627 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.591862 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.604870 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.612467 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.616342 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.616395 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.616407 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.616423 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.616435 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:31Z","lastTransitionTime":"2025-11-24T11:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.624595 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.631091 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/74eb978f-00ff-4ed3-a5da-8026a3211592-system-cni-dir\") pod \"multus-additional-cni-plugins-qjsxf\" (UID: \"74eb978f-00ff-4ed3-a5da-8026a3211592\") " pod="openshift-multus/multus-additional-cni-plugins-qjsxf" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.631143 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/74eb978f-00ff-4ed3-a5da-8026a3211592-os-release\") pod \"multus-additional-cni-plugins-qjsxf\" (UID: \"74eb978f-00ff-4ed3-a5da-8026a3211592\") " pod="openshift-multus/multus-additional-cni-plugins-qjsxf" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.631200 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br29d\" (UniqueName: \"kubernetes.io/projected/74eb978f-00ff-4ed3-a5da-8026a3211592-kube-api-access-br29d\") pod \"multus-additional-cni-plugins-qjsxf\" (UID: \"74eb978f-00ff-4ed3-a5da-8026a3211592\") " pod="openshift-multus/multus-additional-cni-plugins-qjsxf" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.631301 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/74eb978f-00ff-4ed3-a5da-8026a3211592-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qjsxf\" (UID: \"74eb978f-00ff-4ed3-a5da-8026a3211592\") " pod="openshift-multus/multus-additional-cni-plugins-qjsxf" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.631342 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/74eb978f-00ff-4ed3-a5da-8026a3211592-cni-binary-copy\") pod \"multus-additional-cni-plugins-qjsxf\" (UID: \"74eb978f-00ff-4ed3-a5da-8026a3211592\") " pod="openshift-multus/multus-additional-cni-plugins-qjsxf" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.631383 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.631401 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/74eb978f-00ff-4ed3-a5da-8026a3211592-cnibin\") pod \"multus-additional-cni-plugins-qjsxf\" (UID: \"74eb978f-00ff-4ed3-a5da-8026a3211592\") " pod="openshift-multus/multus-additional-cni-plugins-qjsxf" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.631419 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/74eb978f-00ff-4ed3-a5da-8026a3211592-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qjsxf\" (UID: \"74eb978f-00ff-4ed3-a5da-8026a3211592\") " pod="openshift-multus/multus-additional-cni-plugins-qjsxf" Nov 24 11:09:31 crc kubenswrapper[5072]: E1124 11:09:31.631531 5072 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered 
Nov 24 11:09:31 crc kubenswrapper[5072]: E1124 11:09:31.631584 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:09:33.631571375 +0000 UTC m=+25.343095841 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.635706 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.644556 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.654790 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.668263 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.679982 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.693539 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.706898 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.719382 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.719429 5072 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.719441 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.719457 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.719469 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:31Z","lastTransitionTime":"2025-11-24T11:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.720288 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.732494 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.732563 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.732585 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/85ee6420-36f0-467c-acf4-ebea8b02c8d5-mcd-auth-proxy-config\") pod \"machine-config-daemon-jfxnb\" (UID: \"85ee6420-36f0-467c-acf4-ebea8b02c8d5\") " pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.732604 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-multus-cni-dir\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.732620 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-etc-kubernetes\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.732639 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/74eb978f-00ff-4ed3-a5da-8026a3211592-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qjsxf\" (UID: \"74eb978f-00ff-4ed3-a5da-8026a3211592\") " pod="openshift-multus/multus-additional-cni-plugins-qjsxf" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 
11:09:31.732664 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-host-var-lib-kubelet\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.732694 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-multus-conf-dir\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.732715 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-os-release\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: E1124 11:09:31.732744 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:09:33.732723383 +0000 UTC m=+25.444247859 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.732772 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/85ee6420-36f0-467c-acf4-ebea8b02c8d5-proxy-tls\") pod \"machine-config-daemon-jfxnb\" (UID: \"85ee6420-36f0-467c-acf4-ebea8b02c8d5\") " pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.732797 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-host-var-lib-cni-bin\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.732824 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br29d\" (UniqueName: \"kubernetes.io/projected/74eb978f-00ff-4ed3-a5da-8026a3211592-kube-api-access-br29d\") pod \"multus-additional-cni-plugins-qjsxf\" (UID: \"74eb978f-00ff-4ed3-a5da-8026a3211592\") " pod="openshift-multus/multus-additional-cni-plugins-qjsxf" Nov 24 11:09:31 crc kubenswrapper[5072]: E1124 11:09:31.732834 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:09:31 crc kubenswrapper[5072]: E1124 11:09:31.732883 5072 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:09:31 crc kubenswrapper[5072]: E1124 11:09:31.732907 5072 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.732845 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-cni-binary-copy\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: E1124 11:09:31.733007 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:09:33.732968499 +0000 UTC m=+25.444493015 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.733051 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/74eb978f-00ff-4ed3-a5da-8026a3211592-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qjsxf\" (UID: \"74eb978f-00ff-4ed3-a5da-8026a3211592\") " pod="openshift-multus/multus-additional-cni-plugins-qjsxf" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.733100 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-system-cni-dir\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.733140 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-cnibin\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.733174 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-multus-socket-dir-parent\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.733207 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-hostroot\") pod \"multus-t8b9x\" (UID: 
\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.733238 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-host-run-multus-certs\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.733298 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/74eb978f-00ff-4ed3-a5da-8026a3211592-cni-binary-copy\") pod \"multus-additional-cni-plugins-qjsxf\" (UID: \"74eb978f-00ff-4ed3-a5da-8026a3211592\") " pod="openshift-multus/multus-additional-cni-plugins-qjsxf" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.733337 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56nm5\" (UniqueName: \"kubernetes.io/projected/85ee6420-36f0-467c-acf4-ebea8b02c8d5-kube-api-access-56nm5\") pod \"machine-config-daemon-jfxnb\" (UID: \"85ee6420-36f0-467c-acf4-ebea8b02c8d5\") " pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.733406 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/74eb978f-00ff-4ed3-a5da-8026a3211592-cnibin\") pod \"multus-additional-cni-plugins-qjsxf\" (UID: \"74eb978f-00ff-4ed3-a5da-8026a3211592\") " pod="openshift-multus/multus-additional-cni-plugins-qjsxf" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.733441 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/85ee6420-36f0-467c-acf4-ebea8b02c8d5-rootfs\") pod \"machine-config-daemon-jfxnb\" (UID: \"85ee6420-36f0-467c-acf4-ebea8b02c8d5\") " pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.733485 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/74eb978f-00ff-4ed3-a5da-8026a3211592-cnibin\") pod \"multus-additional-cni-plugins-qjsxf\" (UID: \"74eb978f-00ff-4ed3-a5da-8026a3211592\") " pod="openshift-multus/multus-additional-cni-plugins-qjsxf" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.733483 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-host-run-netns\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.733543 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.733571 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/74eb978f-00ff-4ed3-a5da-8026a3211592-system-cni-dir\") pod \"multus-additional-cni-plugins-qjsxf\" (UID: \"74eb978f-00ff-4ed3-a5da-8026a3211592\") " pod="openshift-multus/multus-additional-cni-plugins-qjsxf" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.733596 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmbvh\" (UniqueName: \"kubernetes.io/projected/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-kube-api-access-kmbvh\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.733624 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.733648 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/74eb978f-00ff-4ed3-a5da-8026a3211592-os-release\") pod \"multus-additional-cni-plugins-qjsxf\" (UID: \"74eb978f-00ff-4ed3-a5da-8026a3211592\") " pod="openshift-multus/multus-additional-cni-plugins-qjsxf" Nov 24 11:09:31 crc kubenswrapper[5072]: E1124 11:09:31.733658 5072 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.733672 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-host-run-k8s-cni-cncf-io\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.733694 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-host-var-lib-cni-multus\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: E1124 11:09:31.733719 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:09:33.733697856 +0000 UTC m=+25.445222402 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.733740 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/74eb978f-00ff-4ed3-a5da-8026a3211592-system-cni-dir\") pod \"multus-additional-cni-plugins-qjsxf\" (UID: \"74eb978f-00ff-4ed3-a5da-8026a3211592\") " pod="openshift-multus/multus-additional-cni-plugins-qjsxf" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.733759 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-multus-daemon-config\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: E1124 11:09:31.733823 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:09:31 crc kubenswrapper[5072]: E1124 11:09:31.733841 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.733441 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/74eb978f-00ff-4ed3-a5da-8026a3211592-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qjsxf\" (UID: \"74eb978f-00ff-4ed3-a5da-8026a3211592\") " pod="openshift-multus/multus-additional-cni-plugins-qjsxf" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.733843 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/74eb978f-00ff-4ed3-a5da-8026a3211592-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qjsxf\" (UID: \"74eb978f-00ff-4ed3-a5da-8026a3211592\") " pod="openshift-multus/multus-additional-cni-plugins-qjsxf" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.733900 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/74eb978f-00ff-4ed3-a5da-8026a3211592-os-release\") pod \"multus-additional-cni-plugins-qjsxf\" (UID: \"74eb978f-00ff-4ed3-a5da-8026a3211592\") " pod="openshift-multus/multus-additional-cni-plugins-qjsxf" Nov 24 11:09:31 crc kubenswrapper[5072]: E1124 11:09:31.733853 5072 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:09:31 crc kubenswrapper[5072]: E1124 11:09:31.733958 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-11-24 11:09:33.733947872 +0000 UTC m=+25.445472458 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.733990 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/74eb978f-00ff-4ed3-a5da-8026a3211592-cni-binary-copy\") pod \"multus-additional-cni-plugins-qjsxf\" (UID: \"74eb978f-00ff-4ed3-a5da-8026a3211592\") " pod="openshift-multus/multus-additional-cni-plugins-qjsxf" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.734438 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.747627 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.750263 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br29d\" (UniqueName: \"kubernetes.io/projected/74eb978f-00ff-4ed3-a5da-8026a3211592-kube-api-access-br29d\") pod \"multus-additional-cni-plugins-qjsxf\" (UID: \"74eb978f-00ff-4ed3-a5da-8026a3211592\") " pod="openshift-multus/multus-additional-cni-plugins-qjsxf" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.772280 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.822522 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.822590 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.822601 5072 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.822613 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.822622 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:31Z","lastTransitionTime":"2025-11-24T11:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.835061 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56nm5\" (UniqueName: \"kubernetes.io/projected/85ee6420-36f0-467c-acf4-ebea8b02c8d5-kube-api-access-56nm5\") pod \"machine-config-daemon-jfxnb\" (UID: \"85ee6420-36f0-467c-acf4-ebea8b02c8d5\") " pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.835085 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-hostroot\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.835104 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-host-run-multus-certs\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.835120 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/85ee6420-36f0-467c-acf4-ebea8b02c8d5-rootfs\") pod \"machine-config-daemon-jfxnb\" (UID: \"85ee6420-36f0-467c-acf4-ebea8b02c8d5\") " pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.835136 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-host-run-netns\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.835158 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmbvh\" (UniqueName: \"kubernetes.io/projected/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-kube-api-access-kmbvh\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.835178 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-host-run-k8s-cni-cncf-io\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.835191 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-host-var-lib-cni-multus\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.835206 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-multus-daemon-config\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.835226 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/85ee6420-36f0-467c-acf4-ebea8b02c8d5-mcd-auth-proxy-config\") pod \"machine-config-daemon-jfxnb\" (UID: \"85ee6420-36f0-467c-acf4-ebea8b02c8d5\") " pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.835241 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-multus-cni-dir\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.835255 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-etc-kubernetes\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.835271 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-host-var-lib-kubelet\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.835285 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-multus-conf-dir\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.835299 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-os-release\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.835313 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/85ee6420-36f0-467c-acf4-ebea8b02c8d5-proxy-tls\") pod \"machine-config-daemon-jfxnb\" (UID: \"85ee6420-36f0-467c-acf4-ebea8b02c8d5\") " pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.835328 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-host-var-lib-cni-bin\") pod \"multus-t8b9x\" 
(UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.835343 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-cni-binary-copy\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.835359 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-system-cni-dir\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.835390 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-cnibin\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.835406 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-multus-socket-dir-parent\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.835527 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-multus-socket-dir-parent\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.835758 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-hostroot\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.835789 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-host-run-multus-certs\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.835819 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/85ee6420-36f0-467c-acf4-ebea8b02c8d5-rootfs\") pod \"machine-config-daemon-jfxnb\" (UID: \"85ee6420-36f0-467c-acf4-ebea8b02c8d5\") " pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.835842 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-host-run-netns\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.835970 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" 
(UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-host-run-k8s-cni-cncf-io\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.836026 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-host-var-lib-cni-multus\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.836848 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-os-release\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.836893 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-multus-daemon-config\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.837073 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-host-var-lib-cni-bin\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.836945 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-etc-kubernetes\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.837051 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-multus-cni-dir\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.837137 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-host-var-lib-kubelet\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.837161 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-multus-conf-dir\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.837173 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/85ee6420-36f0-467c-acf4-ebea8b02c8d5-mcd-auth-proxy-config\") pod \"machine-config-daemon-jfxnb\" (UID: \"85ee6420-36f0-467c-acf4-ebea8b02c8d5\") " pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 11:09:31 crc 
kubenswrapper[5072]: I1124 11:09:31.837204 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-cnibin\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.836928 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-system-cni-dir\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.837613 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-cni-binary-copy\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.847024 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/85ee6420-36f0-467c-acf4-ebea8b02c8d5-proxy-tls\") pod \"machine-config-daemon-jfxnb\" (UID: \"85ee6420-36f0-467c-acf4-ebea8b02c8d5\") " pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.856880 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmbvh\" (UniqueName: \"kubernetes.io/projected/1a9fe7b3-71a3-4388-8ee4-7531ceef6049-kube-api-access-kmbvh\") pod \"multus-t8b9x\" (UID: \"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\") " pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.862938 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56nm5\" (UniqueName: \"kubernetes.io/projected/85ee6420-36f0-467c-acf4-ebea8b02c8d5-kube-api-access-56nm5\") pod \"machine-config-daemon-jfxnb\" (UID: \"85ee6420-36f0-467c-acf4-ebea8b02c8d5\") " pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.883786 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.892784 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 11:09:31 crc kubenswrapper[5072]: W1124 11:09:31.896264 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74eb978f_00ff_4ed3_a5da_8026a3211592.slice/crio-e1fe17623644d34073db021204fbb77ef6abd9e1dd2f576eeec37430e9da0662 WatchSource:0}: Error finding container e1fe17623644d34073db021204fbb77ef6abd9e1dd2f576eeec37430e9da0662: Status 404 returned error can't find the container with id e1fe17623644d34073db021204fbb77ef6abd9e1dd2f576eeec37430e9da0662 Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.900559 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-t8b9x" Nov 24 11:09:31 crc kubenswrapper[5072]: W1124 11:09:31.917584 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85ee6420_36f0_467c_acf4_ebea8b02c8d5.slice/crio-de6aaa75fd99f76fd48fce06c07420967ea3d8bff8584bfb7a7b70bc1ab6eb63 WatchSource:0}: Error finding container de6aaa75fd99f76fd48fce06c07420967ea3d8bff8584bfb7a7b70bc1ab6eb63: Status 404 returned error can't find the container with id de6aaa75fd99f76fd48fce06c07420967ea3d8bff8584bfb7a7b70bc1ab6eb63 Nov 24 11:09:31 crc kubenswrapper[5072]: W1124 11:09:31.918233 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a9fe7b3_71a3_4388_8ee4_7531ceef6049.slice/crio-7384c3205424942d05e7b807aeff42369405b6ff5bb131f23504f3dbe7e859cc WatchSource:0}: Error finding container 7384c3205424942d05e7b807aeff42369405b6ff5bb131f23504f3dbe7e859cc: Status 404 returned error can't find the container with id 7384c3205424942d05e7b807aeff42369405b6ff5bb131f23504f3dbe7e859cc Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.925494 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.925532 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.925549 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.925571 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:31 crc kubenswrapper[5072]: I1124 11:09:31.925592 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:31Z","lastTransitionTime":"2025-11-24T11:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.015575 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.015634 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.015725 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:09:32 crc kubenswrapper[5072]: E1124 11:09:32.015718 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:09:32 crc kubenswrapper[5072]: E1124 11:09:32.015841 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:09:32 crc kubenswrapper[5072]: E1124 11:09:32.015949 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.028300 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.028345 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.028357 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.028394 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.028409 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:32Z","lastTransitionTime":"2025-11-24T11:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.130254 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.130285 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.130294 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.130308 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.130317 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:32Z","lastTransitionTime":"2025-11-24T11:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.175953 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t8b9x" event={"ID":"1a9fe7b3-71a3-4388-8ee4-7531ceef6049","Type":"ContainerStarted","Data":"96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74"} Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.175994 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t8b9x" event={"ID":"1a9fe7b3-71a3-4388-8ee4-7531ceef6049","Type":"ContainerStarted","Data":"7384c3205424942d05e7b807aeff42369405b6ff5bb131f23504f3dbe7e859cc"} Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.179021 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerStarted","Data":"a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976"} Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.179055 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerStarted","Data":"de6aaa75fd99f76fd48fce06c07420967ea3d8bff8584bfb7a7b70bc1ab6eb63"} Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.180159 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" event={"ID":"74eb978f-00ff-4ed3-a5da-8026a3211592","Type":"ContainerStarted","Data":"e1fe17623644d34073db021204fbb77ef6abd9e1dd2f576eeec37430e9da0662"} Nov 24 11:09:32 crc kubenswrapper[5072]: E1124 11:09:32.188275 5072 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.194232 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.205630 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.216732 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.225587 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.231719 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.231765 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.231774 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.231813 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.231825 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:32Z","lastTransitionTime":"2025-11-24T11:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.237343 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.250736 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.264686 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.278252 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.289735 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.301968 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.308850 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n4qmw"] Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.309584 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.310782 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.311067 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.311281 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.311316 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.311356 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.311948 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.315495 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.318427 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.330296 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.333655 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.333689 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.333698 5072 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.333730 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.333740 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:32Z","lastTransitionTime":"2025-11-24T11:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.342545 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runni
ng\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.353794 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.372674 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.385920 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.396039 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.410298 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.421809 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.433248 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.435775 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.435810 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.435820 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.435835 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.435844 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:32Z","lastTransitionTime":"2025-11-24T11:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.440184 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-systemd-units\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.440216 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-cni-bin\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.440231 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-ovnkube-config\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.440249 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-cni-netd\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.440276 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-var-lib-openvswitch\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.440402 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-env-overrides\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.440442 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-run-systemd\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.440516 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-node-log\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.440561 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-log-socket\") pod \"ovnkube-node-n4qmw\" (UID: 
\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.440584 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-ovn-node-metrics-cert\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.440629 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trpxh\" (UniqueName: \"kubernetes.io/projected/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-kube-api-access-trpxh\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.440654 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-slash\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.440669 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-run-openvswitch\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.440687 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-ovnkube-script-lib\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.440708 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.440724 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-etc-openvswitch\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.440871 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-run-ovn\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.440915 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-run-netns\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.440946 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-kubelet\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.440961 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-run-ovn-kubernetes\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.444537 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.456450 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.471278 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.486219 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.501865 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.538473 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.538511 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.538520 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.538533 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.538543 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:32Z","lastTransitionTime":"2025-11-24T11:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542030 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-var-lib-openvswitch\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542072 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-env-overrides\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542090 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-run-systemd\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542106 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-node-log\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542123 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-log-socket\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542141 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-ovn-node-metrics-cert\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542158 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trpxh\" (UniqueName: \"kubernetes.io/projected/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-kube-api-access-trpxh\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542169 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-run-systemd\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542199 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-log-socket\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542227 5072 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-slash\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542250 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-node-log\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542178 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-slash\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542288 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-run-openvswitch\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542307 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-ovnkube-script-lib\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542328 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542345 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-etc-openvswitch\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542398 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-run-ovn\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542414 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-kubelet\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542428 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-run-netns\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542413 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-run-openvswitch\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542453 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-run-ovn-kubernetes\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542469 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-systemd-units\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542475 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-run-ovn\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542485 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-cni-bin\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542497 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542507 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-etc-openvswitch\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542501 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-ovnkube-config\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542532 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-run-ovn-kubernetes\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542550 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-cni-netd\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542560 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-kubelet\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542600 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-run-netns\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542605 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-systemd-units\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542625 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-cni-netd\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542634 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-cni-bin\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542652 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-var-lib-openvswitch\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542700 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-env-overrides\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542978 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-ovnkube-script-lib\") pod \"ovnkube-node-n4qmw\" (UID: 
\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.542996 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-ovnkube-config\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.546750 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-ovn-node-metrics-cert\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.557842 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trpxh\" (UniqueName: \"kubernetes.io/projected/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-kube-api-access-trpxh\") pod \"ovnkube-node-n4qmw\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.622143 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:32 crc kubenswrapper[5072]: W1124 11:09:32.634766 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80fda759_ddfd_438a_b5a2_cb775ee1bf7e.slice/crio-c1373cc5d09a0d75178ee71120ac335cf3b3503e019ef93010195b148b5501b9 WatchSource:0}: Error finding container c1373cc5d09a0d75178ee71120ac335cf3b3503e019ef93010195b148b5501b9: Status 404 returned error can't find the container with id c1373cc5d09a0d75178ee71120ac335cf3b3503e019ef93010195b148b5501b9 Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.639969 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.640071 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.640126 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.640182 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.640255 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:32Z","lastTransitionTime":"2025-11-24T11:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.742550 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.742590 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.742599 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.742612 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.742622 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:32Z","lastTransitionTime":"2025-11-24T11:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.846041 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.846089 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.846143 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.846174 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.846207 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:32Z","lastTransitionTime":"2025-11-24T11:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.948596 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.948648 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.948667 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.948688 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:32 crc kubenswrapper[5072]: I1124 11:09:32.948703 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:32Z","lastTransitionTime":"2025-11-24T11:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.052247 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.052317 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.052341 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.052413 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.052450 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:33Z","lastTransitionTime":"2025-11-24T11:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.157666 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.157996 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.158006 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.158022 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.158032 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:33Z","lastTransitionTime":"2025-11-24T11:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.187779 5072 generic.go:334] "Generic (PLEG): container finished" podID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerID="c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413" exitCode=0 Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.187836 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" event={"ID":"80fda759-ddfd-438a-b5a2-cb775ee1bf7e","Type":"ContainerDied","Data":"c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413"} Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.187858 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" event={"ID":"80fda759-ddfd-438a-b5a2-cb775ee1bf7e","Type":"ContainerStarted","Data":"c1373cc5d09a0d75178ee71120ac335cf3b3503e019ef93010195b148b5501b9"} Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.190910 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4"} Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.193859 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerStarted","Data":"21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252"} Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.194979 5072 generic.go:334] "Generic (PLEG): container finished" podID="74eb978f-00ff-4ed3-a5da-8026a3211592" containerID="911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3" exitCode=0 Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.196166 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" event={"ID":"74eb978f-00ff-4ed3-a5da-8026a3211592","Type":"ContainerDied","Data":"911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3"} Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.201102 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.211916 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.223385 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.233956 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.248218 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.260527 
5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.260583 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.260597 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.260617 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.260631 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:33Z","lastTransitionTime":"2025-11-24T11:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.262276 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.276421 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.300589 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.326712 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.351759 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.363025 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.363073 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.363088 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.363106 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.363118 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:33Z","lastTransitionTime":"2025-11-24T11:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.369620 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.380978 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.400263 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z 
is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.411276 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.420280 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.427809 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.437521 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.440308 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-jz4mm"] Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.440662 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-jz4mm" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.442215 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.442409 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.442515 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.443078 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.450687 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c
096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.461532 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.465059 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.465099 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.465113 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.465129 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.465143 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:33Z","lastTransitionTime":"2025-11-24T11:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.475407 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.487110 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.499935 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.513754 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.529716 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.551115 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/19d555ef-9635-4aa7-bce1-7b1eb4805445-serviceca\") pod \"node-ca-jz4mm\" (UID: \"19d555ef-9635-4aa7-bce1-7b1eb4805445\") " pod="openshift-image-registry/node-ca-jz4mm" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.551508 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/19d555ef-9635-4aa7-bce1-7b1eb4805445-host\") pod \"node-ca-jz4mm\" (UID: \"19d555ef-9635-4aa7-bce1-7b1eb4805445\") " pod="openshift-image-registry/node-ca-jz4mm" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.551529 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8k8p\" (UniqueName: \"kubernetes.io/projected/19d555ef-9635-4aa7-bce1-7b1eb4805445-kube-api-access-f8k8p\") pod \"node-ca-jz4mm\" (UID: \"19d555ef-9635-4aa7-bce1-7b1eb4805445\") " pod="openshift-image-registry/node-ca-jz4mm" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.557036 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z 
is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.567417 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.567462 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.567475 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.567493 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.567507 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:33Z","lastTransitionTime":"2025-11-24T11:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.570801 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc 
kubenswrapper[5072]: I1124 11:09:33.585346 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.608233 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.623257 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.642418 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"
tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.652053 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/19d555ef-9635-4aa7-bce1-7b1eb4805445-serviceca\") pod \"node-ca-jz4mm\" (UID: \"19d555ef-9635-4aa7-bce1-7b1eb4805445\") " pod="openshift-image-registry/node-ca-jz4mm" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.652108 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.652146 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/19d555ef-9635-4aa7-bce1-7b1eb4805445-host\") pod \"node-ca-jz4mm\" (UID: \"19d555ef-9635-4aa7-bce1-7b1eb4805445\") " pod="openshift-image-registry/node-ca-jz4mm" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.652170 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8k8p\" (UniqueName: \"kubernetes.io/projected/19d555ef-9635-4aa7-bce1-7b1eb4805445-kube-api-access-f8k8p\") pod \"node-ca-jz4mm\" (UID: \"19d555ef-9635-4aa7-bce1-7b1eb4805445\") " pod="openshift-image-registry/node-ca-jz4mm" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.652535 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/19d555ef-9635-4aa7-bce1-7b1eb4805445-host\") pod \"node-ca-jz4mm\" (UID: \"19d555ef-9635-4aa7-bce1-7b1eb4805445\") " pod="openshift-image-registry/node-ca-jz4mm" Nov 24 11:09:33 crc kubenswrapper[5072]: E1124 11:09:33.652708 5072 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:09:33 crc kubenswrapper[5072]: E1124 11:09:33.652865 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:09:37.65284078 +0000 UTC m=+29.364365316 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.653540 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/19d555ef-9635-4aa7-bce1-7b1eb4805445-serviceca\") pod \"node-ca-jz4mm\" (UID: \"19d555ef-9635-4aa7-bce1-7b1eb4805445\") " pod="openshift-image-registry/node-ca-jz4mm" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.659694 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-co
nf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.670167 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.670212 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.670228 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.670249 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.670263 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:33Z","lastTransitionTime":"2025-11-24T11:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.674112 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.682046 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8k8p\" (UniqueName: \"kubernetes.io/projected/19d555ef-9635-4aa7-bce1-7b1eb4805445-kube-api-access-f8k8p\") pod \"node-ca-jz4mm\" (UID: \"19d555ef-9635-4aa7-bce1-7b1eb4805445\") " pod="openshift-image-registry/node-ca-jz4mm" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.689260 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.704403 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.714189 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.725435 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.740026 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.752527 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.752656 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:09:33 crc kubenswrapper[5072]: E1124 11:09:33.752701 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:09:37.752682207 +0000 UTC m=+29.464206683 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.752729 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:09:33 crc kubenswrapper[5072]: E1124 11:09:33.752740 5072 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.752761 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:09:33 crc kubenswrapper[5072]: E1124 11:09:33.752776 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:09:37.752766469 +0000 UTC m=+29.464290945 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:09:33 crc kubenswrapper[5072]: E1124 11:09:33.752859 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:09:33 crc kubenswrapper[5072]: E1124 11:09:33.752868 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:09:33 crc kubenswrapper[5072]: E1124 11:09:33.752876 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:09:33 crc kubenswrapper[5072]: E1124 11:09:33.752883 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:09:33 crc kubenswrapper[5072]: E1124 11:09:33.752889 5072 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:09:33 crc kubenswrapper[5072]: E1124 11:09:33.752893 5072 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:09:33 crc kubenswrapper[5072]: E1124 11:09:33.752917 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:09:37.752908262 +0000 UTC m=+29.464432738 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:09:33 crc kubenswrapper[5072]: E1124 11:09:33.752932 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:09:37.752925772 +0000 UTC m=+29.464450248 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.753527 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.771792 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.771817 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.771825 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.771839 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.771848 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:33Z","lastTransitionTime":"2025-11-24T11:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.779927 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.818902 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.833075 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-jz4mm" Nov 24 11:09:33 crc kubenswrapper[5072]: W1124 11:09:33.843984 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19d555ef_9635_4aa7_bce1_7b1eb4805445.slice/crio-ade6e707c0622528f92d002b28832c08fb3cf4843c6e1047324cc973b24c867b WatchSource:0}: Error finding container ade6e707c0622528f92d002b28832c08fb3cf4843c6e1047324cc973b24c867b: Status 404 returned error can't find the container with id ade6e707c0622528f92d002b28832c08fb3cf4843c6e1047324cc973b24c867b Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.876152 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.876194 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.876206 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.876223 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.876235 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:33Z","lastTransitionTime":"2025-11-24T11:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.978219 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.978252 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.978260 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.978804 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:33 crc kubenswrapper[5072]: I1124 11:09:33.978828 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:33Z","lastTransitionTime":"2025-11-24T11:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.015689 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.015745 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.015697 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:09:34 crc kubenswrapper[5072]: E1124 11:09:34.015826 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:09:34 crc kubenswrapper[5072]: E1124 11:09:34.015971 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:09:34 crc kubenswrapper[5072]: E1124 11:09:34.016054 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.081425 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.081467 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.081479 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.081498 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.081510 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:34Z","lastTransitionTime":"2025-11-24T11:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.183238 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.183520 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.183528 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.183542 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.183552 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:34Z","lastTransitionTime":"2025-11-24T11:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.199476 5072 generic.go:334] "Generic (PLEG): container finished" podID="74eb978f-00ff-4ed3-a5da-8026a3211592" containerID="829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645" exitCode=0 Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.199523 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" event={"ID":"74eb978f-00ff-4ed3-a5da-8026a3211592","Type":"ContainerDied","Data":"829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645"} Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.203518 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" event={"ID":"80fda759-ddfd-438a-b5a2-cb775ee1bf7e","Type":"ContainerStarted","Data":"89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491"} Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.203562 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" event={"ID":"80fda759-ddfd-438a-b5a2-cb775ee1bf7e","Type":"ContainerStarted","Data":"9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39"} Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.203578 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" event={"ID":"80fda759-ddfd-438a-b5a2-cb775ee1bf7e","Type":"ContainerStarted","Data":"c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24"} Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.203591 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" event={"ID":"80fda759-ddfd-438a-b5a2-cb775ee1bf7e","Type":"ContainerStarted","Data":"1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790"} Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.203603 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" event={"ID":"80fda759-ddfd-438a-b5a2-cb775ee1bf7e","Type":"ContainerStarted","Data":"98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9"} Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.203618 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" event={"ID":"80fda759-ddfd-438a-b5a2-cb775ee1bf7e","Type":"ContainerStarted","Data":"7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb"} Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.210184 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jz4mm" event={"ID":"19d555ef-9635-4aa7-bce1-7b1eb4805445","Type":"ContainerStarted","Data":"4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145"} Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.210249 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jz4mm" event={"ID":"19d555ef-9635-4aa7-bce1-7b1eb4805445","Type":"ContainerStarted","Data":"ade6e707c0622528f92d002b28832c08fb3cf4843c6e1047324cc973b24c867b"} Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.214543 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.226193 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.241338 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.254332 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.267227 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.278911 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.287032 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.287063 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.287073 5072 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.287086 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.287095 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:34Z","lastTransitionTime":"2025-11-24T11:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.289694 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.299444 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.310062 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.325110 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.343365 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.353789 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.370463 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:34Z 
is after 2025-08-24T17:21:41Z" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.385804 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\
"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.389422 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.389457 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.389469 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.389484 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.389496 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:34Z","lastTransitionTime":"2025-11-24T11:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.424869 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\
\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.459260 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.493122 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.493177 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.493192 5072 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.493218 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.493244 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:34Z","lastTransitionTime":"2025-11-24T11:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.503529 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.536903 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.587026 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.595493 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.595566 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.595583 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.595608 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.595629 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:34Z","lastTransitionTime":"2025-11-24T11:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.619486 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.658140 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.698130 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.698198 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.698224 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.698258 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.698283 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:34Z","lastTransitionTime":"2025-11-24T11:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.706848 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.741507 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.785758 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.800754 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.800802 5072 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.800813 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.800830 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.800842 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:34Z","lastTransitionTime":"2025-11-24T11:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.824673 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.864236 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.899688 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.903591 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.903650 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.903670 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.903694 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.903711 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:34Z","lastTransitionTime":"2025-11-24T11:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:34 crc kubenswrapper[5072]: I1124 11:09:34.958835 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377
e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.005780 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.005807 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.005817 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.005832 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.005844 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:35Z","lastTransitionTime":"2025-11-24T11:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.108124 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.108159 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.108169 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.108182 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.108191 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:35Z","lastTransitionTime":"2025-11-24T11:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.211200 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.211256 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.211273 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.211296 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.211315 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:35Z","lastTransitionTime":"2025-11-24T11:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.216139 5072 generic.go:334] "Generic (PLEG): container finished" podID="74eb978f-00ff-4ed3-a5da-8026a3211592" containerID="5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336" exitCode=0 Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.216196 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" event={"ID":"74eb978f-00ff-4ed3-a5da-8026a3211592","Type":"ContainerDied","Data":"5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336"} Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.242775 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.256289 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mo
untPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.269831 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.284642 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.301826 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.314058 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.314112 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.314129 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.314154 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.314171 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:35Z","lastTransitionTime":"2025-11-24T11:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.314415 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.324076 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.335478 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba
918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.347022 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.360603 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.379572 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.416136 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.416169 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.416177 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.416191 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.416199 5072 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:35Z","lastTransitionTime":"2025-11-24T11:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.422849 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.461127 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.516993 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:35Z 
is after 2025-08-24T17:21:41Z" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.518345 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.518389 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.518403 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.518419 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.518431 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:35Z","lastTransitionTime":"2025-11-24T11:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.621887 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.621942 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.621954 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.621972 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.621986 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:35Z","lastTransitionTime":"2025-11-24T11:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.725212 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.725255 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.725266 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.725283 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.725295 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:35Z","lastTransitionTime":"2025-11-24T11:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.828303 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.828672 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.828694 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.828724 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.828747 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:35Z","lastTransitionTime":"2025-11-24T11:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.931493 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.931534 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.931547 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.931563 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:35 crc kubenswrapper[5072]: I1124 11:09:35.931578 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:35Z","lastTransitionTime":"2025-11-24T11:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.016273 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.016323 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.016484 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:09:36 crc kubenswrapper[5072]: E1124 11:09:36.016491 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:09:36 crc kubenswrapper[5072]: E1124 11:09:36.016623 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:09:36 crc kubenswrapper[5072]: E1124 11:09:36.016806 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.034822 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.034976 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.035001 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.035087 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.035112 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:36Z","lastTransitionTime":"2025-11-24T11:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.138263 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.138319 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.138332 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.138348 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.138359 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:36Z","lastTransitionTime":"2025-11-24T11:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.223488 5072 generic.go:334] "Generic (PLEG): container finished" podID="74eb978f-00ff-4ed3-a5da-8026a3211592" containerID="4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea" exitCode=0 Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.223536 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" event={"ID":"74eb978f-00ff-4ed3-a5da-8026a3211592","Type":"ContainerDied","Data":"4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea"} Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.241090 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.241138 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.241152 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.241168 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.241182 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:36Z","lastTransitionTime":"2025-11-24T11:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.257098 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:36Z 
is after 2025-08-24T17:21:41Z" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.274877 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.294291 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.311210 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.328519 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.343750 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.343809 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.343826 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.343847 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.343860 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:36Z","lastTransitionTime":"2025-11-24T11:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.344010 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.360438 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.375344 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.388100 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.404230 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.419637 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.434538 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.446457 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.446497 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.446509 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.446529 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.446542 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:36Z","lastTransitionTime":"2025-11-24T11:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.449650 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.468207 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.549334 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.549424 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.549443 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.549468 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.549485 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:36Z","lastTransitionTime":"2025-11-24T11:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.652218 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.652294 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.652316 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.652346 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.652377 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:36Z","lastTransitionTime":"2025-11-24T11:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.756780 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.756840 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.756856 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.756883 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.756899 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:36Z","lastTransitionTime":"2025-11-24T11:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.860238 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.860284 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.860300 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.860323 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.860340 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:36Z","lastTransitionTime":"2025-11-24T11:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.963743 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.963794 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.963811 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.963834 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:36 crc kubenswrapper[5072]: I1124 11:09:36.963850 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:36Z","lastTransitionTime":"2025-11-24T11:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.066661 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.066707 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.066725 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.066747 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.066766 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:37Z","lastTransitionTime":"2025-11-24T11:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.169921 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.169956 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.169965 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.169979 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.169990 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:37Z","lastTransitionTime":"2025-11-24T11:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.232202 5072 generic.go:334] "Generic (PLEG): container finished" podID="74eb978f-00ff-4ed3-a5da-8026a3211592" containerID="cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14" exitCode=0 Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.232295 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" event={"ID":"74eb978f-00ff-4ed3-a5da-8026a3211592","Type":"ContainerDied","Data":"cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14"} Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.238492 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" event={"ID":"80fda759-ddfd-438a-b5a2-cb775ee1bf7e","Type":"ContainerStarted","Data":"af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975"} Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.265168 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.273610 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.273854 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.274026 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.274207 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.274451 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:37Z","lastTransitionTime":"2025-11-24T11:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.288906 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.307848 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.321660 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.339651 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.354041 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.367105 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.376825 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.376868 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.376879 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.376929 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.376962 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:37Z","lastTransitionTime":"2025-11-24T11:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.382195 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.396770 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.415992 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.428480 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.440252 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.462353 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:37Z 
is after 2025-08-24T17:21:41Z" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.474635 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.479228 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.479261 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.479271 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.479287 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.479298 5072 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:37Z","lastTransitionTime":"2025-11-24T11:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.582345 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.582458 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.582476 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.582500 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.582517 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:37Z","lastTransitionTime":"2025-11-24T11:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.685761 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.685815 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.685831 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.685854 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.685871 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:37Z","lastTransitionTime":"2025-11-24T11:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.689363 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:09:37 crc kubenswrapper[5072]: E1124 11:09:37.689595 5072 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:09:37 crc kubenswrapper[5072]: E1124 11:09:37.689719 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:09:45.689687834 +0000 UTC m=+37.401212350 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.789244 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.789305 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.789322 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.789345 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.789361 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:37Z","lastTransitionTime":"2025-11-24T11:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.790455 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.790567 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.790641 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:09:37 crc kubenswrapper[5072]: E1124 11:09:37.790726 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:09:45.790691069 +0000 UTC m=+37.502215585 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:09:37 crc kubenswrapper[5072]: E1124 11:09:37.790776 5072 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.790807 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:09:37 crc kubenswrapper[5072]: E1124 11:09:37.790873 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:09:45.790845933 +0000 UTC m=+37.502370479 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:09:37 crc kubenswrapper[5072]: E1124 11:09:37.790787 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:09:37 crc kubenswrapper[5072]: E1124 11:09:37.790911 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:09:37 crc kubenswrapper[5072]: E1124 11:09:37.790927 5072 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:09:37 crc kubenswrapper[5072]: E1124 11:09:37.790989 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:09:45.790968826 +0000 UTC m=+37.502493362 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:09:37 crc kubenswrapper[5072]: E1124 11:09:37.791003 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:09:37 crc kubenswrapper[5072]: E1124 11:09:37.791031 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:09:37 crc kubenswrapper[5072]: E1124 11:09:37.791053 5072 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:09:37 crc kubenswrapper[5072]: E1124 11:09:37.791113 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:09:45.791092609 +0000 UTC m=+37.502617225 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.891668 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.891709 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.891721 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.891738 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.891750 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:37Z","lastTransitionTime":"2025-11-24T11:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.994882 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.994957 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.995007 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.995040 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:37 crc kubenswrapper[5072]: I1124 11:09:37.995084 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:37Z","lastTransitionTime":"2025-11-24T11:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.016331 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.016332 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.016478 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:09:38 crc kubenswrapper[5072]: E1124 11:09:38.016587 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:09:38 crc kubenswrapper[5072]: E1124 11:09:38.016725 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:09:38 crc kubenswrapper[5072]: E1124 11:09:38.016852 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.097351 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.097441 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.097465 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.097496 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.097515 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:38Z","lastTransitionTime":"2025-11-24T11:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.200502 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.200556 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.200574 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.200597 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.200613 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:38Z","lastTransitionTime":"2025-11-24T11:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.247076 5072 generic.go:334] "Generic (PLEG): container finished" podID="74eb978f-00ff-4ed3-a5da-8026a3211592" containerID="09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c" exitCode=0 Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.247128 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" event={"ID":"74eb978f-00ff-4ed3-a5da-8026a3211592","Type":"ContainerDied","Data":"09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c"} Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.271128 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.295856 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.303986 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.304047 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.304067 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.304091 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.304110 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:38Z","lastTransitionTime":"2025-11-24T11:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.328289 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377
e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.351123 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.370061 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.388888 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.404063 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.406407 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.406463 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.406480 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.406896 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.406924 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:38Z","lastTransitionTime":"2025-11-24T11:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.422543 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.442696 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.462913 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.476114 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.491695 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.504245 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.509274 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.509321 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.509336 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.509358 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.509399 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:38Z","lastTransitionTime":"2025-11-24T11:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.521538 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.611975 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.612039 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.612054 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.612074 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.612090 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:38Z","lastTransitionTime":"2025-11-24T11:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.714264 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.714293 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.714302 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.714314 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.714322 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:38Z","lastTransitionTime":"2025-11-24T11:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.816949 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.816984 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.816993 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.817007 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.817016 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:38Z","lastTransitionTime":"2025-11-24T11:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.919450 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.919507 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.919526 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.919550 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:38 crc kubenswrapper[5072]: I1124 11:09:38.919566 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:38Z","lastTransitionTime":"2025-11-24T11:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.021887 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.021939 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.021955 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.021979 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.022001 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:39Z","lastTransitionTime":"2025-11-24T11:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.067501 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.082036 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.097154 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.122354 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.123742 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.123784 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.123796 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.123812 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.123824 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:39Z","lastTransitionTime":"2025-11-24T11:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.138334 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.151732 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.169816 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba
918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.183260 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.196663 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.212341 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.227029 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.227102 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.227132 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.227166 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.227184 5072 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:39Z","lastTransitionTime":"2025-11-24T11:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.229631 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.241951 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.257036 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" event={"ID":"74eb978f-00ff-4ed3-a5da-8026a3211592","Type":"ContainerStarted","Data":"a69b8017daa872327d88eab8150845309e30c5cf37b229292e7c8a80e5d599c6"} Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.264046 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" event={"ID":"80fda759-ddfd-438a-b5a2-cb775ee1bf7e","Type":"ContainerStarted","Data":"07e6e7ab2f5cf671ed26130bd75177f315add4c324c1f8ca873c79b389c6d8d9"} Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.264417 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.272005 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\
\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\
",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.293300 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\
\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.299603 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.314697 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-c
erts\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.325281 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\
":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.326359 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.326455 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.326473 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.326494 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.326510 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:39Z","lastTransitionTime":"2025-11-24T11:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.334984 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: E1124 11:09:39.342362 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient 
memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\
\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\
":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.344914 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.344942 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.344951 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.344963 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.344972 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:39Z","lastTransitionTime":"2025-11-24T11:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.347078 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: E1124 11:09:39.355655 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.358735 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.358777 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.358794 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.358816 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.358831 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:39Z","lastTransitionTime":"2025-11-24T11:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.360945 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.370020 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: E1124 11:09:39.374760 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d
0383649-b062-48ed-9fc1-5e553cb9256a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.378052 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.378099 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.378109 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.378172 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.378183 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:39Z","lastTransitionTime":"2025-11-24T11:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.384477 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: E1124 11:09:39.390825 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d
0383649-b062-48ed-9fc1-5e553cb9256a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.394337 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.394420 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.394440 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.394462 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.394480 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:39Z","lastTransitionTime":"2025-11-24T11:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.397467 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.410195 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: E1124 11:09:39.411169 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: E1124 11:09:39.411433 5072 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.413277 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.413327 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.413345 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.413371 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.413412 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:39Z","lastTransitionTime":"2025-11-24T11:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.426941 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.437741 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.462215 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e6e7ab2f5cf671ed26130bd75177f315add4c3
24c1f8ca873c79b389c6d8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.478649 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.495016 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69b8017daa872327d88eab8150845309e30c5cf37b229292e7c8a80e5d599c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\
\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.512895 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69b8017daa872327d88eab8150845309e30c5cf37b229292e7c8a80e5d599c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 
2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.516623 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.516681 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.516699 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.516723 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.516741 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:39Z","lastTransitionTime":"2025-11-24T11:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.529585 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.543997 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.553869 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.564823 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.576472 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.588256 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.604043 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.616671 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.619344 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.619450 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.619470 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.619492 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.619509 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:39Z","lastTransitionTime":"2025-11-24T11:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.637733 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.649619 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.671456 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.687973 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.717728 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e6e7ab2f5cf671ed26130bd75177f315add4c3
24c1f8ca873c79b389c6d8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.722817 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.722876 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.722892 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.722916 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.722934 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:39Z","lastTransitionTime":"2025-11-24T11:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.826095 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.826157 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.826174 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.826196 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.826213 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:39Z","lastTransitionTime":"2025-11-24T11:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.970272 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.970346 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.970363 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.970425 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:39 crc kubenswrapper[5072]: I1124 11:09:39.970445 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:39Z","lastTransitionTime":"2025-11-24T11:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.015976 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.016111 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.015976 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:09:40 crc kubenswrapper[5072]: E1124 11:09:40.016165 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:09:40 crc kubenswrapper[5072]: E1124 11:09:40.016293 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:09:40 crc kubenswrapper[5072]: E1124 11:09:40.016464 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.073973 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.074042 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.074065 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.074091 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.074111 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:40Z","lastTransitionTime":"2025-11-24T11:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.176157 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.176197 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.176208 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.176225 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.176236 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:40Z","lastTransitionTime":"2025-11-24T11:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.269323 5072 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.270259 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.279569 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.279638 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.279662 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.279690 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.279714 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:40Z","lastTransitionTime":"2025-11-24T11:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.305347 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.324864 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.340675 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.354534 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.371486 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.382252 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.382298 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.382309 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.382326 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.382340 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:40Z","lastTransitionTime":"2025-11-24T11:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.391564 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.405819 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.423164 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba
918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.439557 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.455149 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.469892 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.485659 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.485720 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.485738 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.485762 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.485776 5072 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:40Z","lastTransitionTime":"2025-11-24T11:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.486966 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.502526 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.523030 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e6e7ab2f5cf671ed26130bd75177f315add4c3
24c1f8ca873c79b389c6d8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.539153 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69b8017daa872327d88eab8150845309e30c5cf37b229292e7c8a80e5d599c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.588269 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.588340 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:40 crc 
kubenswrapper[5072]: I1124 11:09:40.588365 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.588423 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.588458 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:40Z","lastTransitionTime":"2025-11-24T11:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.690895 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.690934 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.690945 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.690960 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.690971 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:40Z","lastTransitionTime":"2025-11-24T11:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.794588 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.794645 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.794660 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.794678 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.794696 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:40Z","lastTransitionTime":"2025-11-24T11:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.896805 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.896849 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.896857 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.896871 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:40 crc kubenswrapper[5072]: I1124 11:09:40.896884 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:40Z","lastTransitionTime":"2025-11-24T11:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:40.999871 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:40.999934 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:40.999951 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:40.999975 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:40.999992 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:40Z","lastTransitionTime":"2025-11-24T11:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.103050 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.103107 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.103125 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.103148 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.103165 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:41Z","lastTransitionTime":"2025-11-24T11:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.206509 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.206564 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.206581 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.206603 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.206622 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:41Z","lastTransitionTime":"2025-11-24T11:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.271769 5072 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.310005 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.310061 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.310077 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.310100 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.310117 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:41Z","lastTransitionTime":"2025-11-24T11:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.413914 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.413971 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.413996 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.414028 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.414051 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:41Z","lastTransitionTime":"2025-11-24T11:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.516173 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.516242 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.516258 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.516282 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.516300 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:41Z","lastTransitionTime":"2025-11-24T11:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.619555 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.619624 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.619646 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.619671 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.619689 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:41Z","lastTransitionTime":"2025-11-24T11:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.722313 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.722418 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.722437 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.722459 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.722478 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:41Z","lastTransitionTime":"2025-11-24T11:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.824193 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.824230 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.824238 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.824250 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.824259 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:41Z","lastTransitionTime":"2025-11-24T11:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.926660 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.926721 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.926740 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.926763 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:41 crc kubenswrapper[5072]: I1124 11:09:41.926781 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:41Z","lastTransitionTime":"2025-11-24T11:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.016350 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.016362 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.016397 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:09:42 crc kubenswrapper[5072]: E1124 11:09:42.016596 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:09:42 crc kubenswrapper[5072]: E1124 11:09:42.016692 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:09:42 crc kubenswrapper[5072]: E1124 11:09:42.016852 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.029189 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.029246 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.029264 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.029291 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.029310 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:42Z","lastTransitionTime":"2025-11-24T11:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.132265 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.132627 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.132646 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.132671 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.132689 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:42Z","lastTransitionTime":"2025-11-24T11:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.236272 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.236353 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.236424 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.236460 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.236480 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:42Z","lastTransitionTime":"2025-11-24T11:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.277756 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4qmw_80fda759-ddfd-438a-b5a2-cb775ee1bf7e/ovnkube-controller/0.log" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.285618 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.286113 5072 generic.go:334] "Generic (PLEG): container finished" podID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerID="07e6e7ab2f5cf671ed26130bd75177f315add4c324c1f8ca873c79b389c6d8d9" exitCode=1 Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.286161 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" event={"ID":"80fda759-ddfd-438a-b5a2-cb775ee1bf7e","Type":"ContainerDied","Data":"07e6e7ab2f5cf671ed26130bd75177f315add4c324c1f8ca873c79b389c6d8d9"} Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.287503 5072 scope.go:117] "RemoveContainer" containerID="07e6e7ab2f5cf671ed26130bd75177f315add4c324c1f8ca873c79b389c6d8d9" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.312345 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.333624 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.340735 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.340799 5072 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.340825 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.340849 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.340864 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:42Z","lastTransitionTime":"2025-11-24T11:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.349239 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:42 crc 
kubenswrapper[5072]: I1124 11:09:42.370577 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.387186 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\
\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.399401 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.415065 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.432546 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.443191 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.443248 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.443262 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.443281 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.443294 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:42Z","lastTransitionTime":"2025-11-24T11:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.450148 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.468699 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.484147 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.498259 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.528521 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e6e7ab2f5cf671ed26130bd75177f315add4c3
24c1f8ca873c79b389c6d8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.545150 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.545197 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.545207 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.545258 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.545280 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:42Z","lastTransitionTime":"2025-11-24T11:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.549072 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69b8017daa872327d88eab8150845309e30c5cf37b229292e7c8a80e5d599c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.567618 5072 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.578913 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.600022 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e6e7ab2f5cf671ed26130bd75177f315add4c3
24c1f8ca873c79b389c6d8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07e6e7ab2f5cf671ed26130bd75177f315add4c324c1f8ca873c79b389c6d8d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:09:41Z\\\",\\\"message\\\":\\\"\\\\nI1124 11:09:41.542791 6400 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:09:41.542822 6400 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:09:41.542853 6400 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:09:41.542876 6400 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 11:09:41.542898 6400 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 11:09:41.542912 6400 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:09:41.542952 6400 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:09:41.542994 6400 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 11:09:41.543016 6400 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 11:09:41.543049 6400 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 11:09:41.543050 6400 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:09:41.543088 6400 factory.go:656] Stopping watch factory\\\\nI1124 11:09:41.543107 6400 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:09:41.543015 6400 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:09:41.543088 6400 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 11:09:41.542995 6400 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 
11:09:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.623605 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69b8017daa872327d88eab8150845309e30c5cf37b229292e7c8a80e5d599c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.641659 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:42Z is after 2025-08-24T17:21:41Z" 
Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.648104 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.648168 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.648187 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.648212 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.648232 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:42Z","lastTransitionTime":"2025-11-24T11:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.654586 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.674845 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.694525 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.712115 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:42Z is 
after 2025-08-24T17:21:41Z" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.733649 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.751356 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 
11:09:42.751458 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.751481 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.751509 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.751528 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:42Z","lastTransitionTime":"2025-11-24T11:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.756202 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.779233 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.803734 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.822360 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.854460 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.854530 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.854554 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.854579 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.854600 5072 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:42Z","lastTransitionTime":"2025-11-24T11:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.956680 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.956763 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.956778 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.956798 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:42 crc kubenswrapper[5072]: I1124 11:09:42.956811 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:42Z","lastTransitionTime":"2025-11-24T11:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.058850 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.058888 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.058897 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.058911 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.058921 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:43Z","lastTransitionTime":"2025-11-24T11:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.161103 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.161161 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.161179 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.161202 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.161221 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:43Z","lastTransitionTime":"2025-11-24T11:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.264414 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.264523 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.264540 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.264563 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.264579 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:43Z","lastTransitionTime":"2025-11-24T11:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.293000 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4qmw_80fda759-ddfd-438a-b5a2-cb775ee1bf7e/ovnkube-controller/0.log" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.297113 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" event={"ID":"80fda759-ddfd-438a-b5a2-cb775ee1bf7e","Type":"ContainerStarted","Data":"17a209788447e8d556a2f5d4611b2979e998e017c2ad7a81d88b9d005f215721"} Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.297259 5072 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.321825 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69b8017daa872327d88eab8150845309e30c5cf37b229292e7c8a80e5d599c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T1
1:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2
de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:43Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.343812 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:43Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.360012 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:43Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.366816 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.366880 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.366899 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.366923 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.366946 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:43Z","lastTransitionTime":"2025-11-24T11:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.378480 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"nam
e\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:43Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.392258 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:43Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.409743 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:43Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.423356 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:43Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.438933 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:43Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.451718 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:43Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.464856 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:43Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.469206 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.469248 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.469264 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.469285 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.469301 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:43Z","lastTransitionTime":"2025-11-24T11:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.480124 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:43Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.508773 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a209788447e8d556a2f5d4611b2979e998e017c2ad7a81d88b9d005f215721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07e6e7ab2f5cf671ed26130bd75177f315add4c324c1f8ca873c79b389c6d8d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:09:41Z\\\",\\\"message\\\":\\\"\\\\nI1124 11:09:41.542791 6400 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:09:41.542822 6400 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:09:41.542853 6400 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:09:41.542876 6400 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 11:09:41.542898 6400 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 11:09:41.542912 6400 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:09:41.542952 6400 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:09:41.542994 6400 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 11:09:41.543016 6400 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 11:09:41.543049 6400 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 11:09:41.543050 6400 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:09:41.543088 6400 factory.go:656] Stopping watch factory\\\\nI1124 11:09:41.543107 6400 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:09:41.543015 6400 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:09:41.543088 6400 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 11:09:41.542995 6400 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 
11:09:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:43Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.528235 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:43Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.545935 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:43Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.572122 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.572170 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.572183 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.572202 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.572218 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:43Z","lastTransitionTime":"2025-11-24T11:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.675962 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.676248 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.676308 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.676441 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.676460 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:43Z","lastTransitionTime":"2025-11-24T11:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.780001 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.780082 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.780107 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.780220 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.780252 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:43Z","lastTransitionTime":"2025-11-24T11:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.883213 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.883265 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.883282 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.883303 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.883319 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:43Z","lastTransitionTime":"2025-11-24T11:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.986778 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.986824 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.986839 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.986861 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:43 crc kubenswrapper[5072]: I1124 11:09:43.986879 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:43Z","lastTransitionTime":"2025-11-24T11:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.015877 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.015925 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.015886 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:09:44 crc kubenswrapper[5072]: E1124 11:09:44.016048 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:09:44 crc kubenswrapper[5072]: E1124 11:09:44.016203 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:09:44 crc kubenswrapper[5072]: E1124 11:09:44.016329 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.089015 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.089083 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.089094 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.089110 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.089454 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:44Z","lastTransitionTime":"2025-11-24T11:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.192333 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.192428 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.192448 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.192473 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.192490 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:44Z","lastTransitionTime":"2025-11-24T11:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.295278 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.295365 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.295423 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.295453 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.295471 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:44Z","lastTransitionTime":"2025-11-24T11:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.302828 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4qmw_80fda759-ddfd-438a-b5a2-cb775ee1bf7e/ovnkube-controller/1.log" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.303638 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4qmw_80fda759-ddfd-438a-b5a2-cb775ee1bf7e/ovnkube-controller/0.log" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.307588 5072 generic.go:334] "Generic (PLEG): container finished" podID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerID="17a209788447e8d556a2f5d4611b2979e998e017c2ad7a81d88b9d005f215721" exitCode=1 Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.307664 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" event={"ID":"80fda759-ddfd-438a-b5a2-cb775ee1bf7e","Type":"ContainerDied","Data":"17a209788447e8d556a2f5d4611b2979e998e017c2ad7a81d88b9d005f215721"} Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.307725 5072 scope.go:117] "RemoveContainer" containerID="07e6e7ab2f5cf671ed26130bd75177f315add4c324c1f8ca873c79b389c6d8d9" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.308934 5072 scope.go:117] "RemoveContainer" containerID="17a209788447e8d556a2f5d4611b2979e998e017c2ad7a81d88b9d005f215721" Nov 24 11:09:44 crc kubenswrapper[5072]: E1124 11:09:44.309530 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-n4qmw_openshift-ovn-kubernetes(80fda759-ddfd-438a-b5a2-cb775ee1bf7e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.333445 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69b8017daa872327d88eab8150845309e30c5cf37b229292e7c8a80e5d599c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.355039 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.373198 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.388995 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.398487 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.398616 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.398637 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.398661 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.398679 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:44Z","lastTransitionTime":"2025-11-24T11:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.411945 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.430237 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.449305 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.451840 5072 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6"] Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.452515 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.454451 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.456448 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.469964 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-c
luster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.489302 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.501128 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.501183 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.501204 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.501228 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.501245 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:44Z","lastTransitionTime":"2025-11-24T11:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.510846 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.511045 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9c05ddf6-986e-4bd6-95f0-7d734bc59140-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wndk6\" (UID: \"9c05ddf6-986e-4bd6-95f0-7d734bc59140\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.511114 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9c05ddf6-986e-4bd6-95f0-7d734bc59140-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wndk6\" (UID: \"9c05ddf6-986e-4bd6-95f0-7d734bc59140\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.511254 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/9c05ddf6-986e-4bd6-95f0-7d734bc59140-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wndk6\" (UID: \"9c05ddf6-986e-4bd6-95f0-7d734bc59140\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.511294 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gztmk\" (UniqueName: \"kubernetes.io/projected/9c05ddf6-986e-4bd6-95f0-7d734bc59140-kube-api-access-gztmk\") pod \"ovnkube-control-plane-749d76644c-wndk6\" (UID: \"9c05ddf6-986e-4bd6-95f0-7d734bc59140\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.530646 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.557908 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.578753 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.604245 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.604301 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.604319 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.604342 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.604359 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:44Z","lastTransitionTime":"2025-11-24T11:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.610156 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a209788447e8d556a2f5d4611b2979e998e017c2ad7a81d88b9d005f215721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07e6e7ab2f5cf671ed26130bd75177f315add4c324c1f8ca873c79b389c6d8d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:09:41Z\\\",\\\"message\\\":\\\"\\\\nI1124 11:09:41.542791 6400 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:09:41.542822 6400 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:09:41.542853 6400 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:09:41.542876 6400 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 11:09:41.542898 6400 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 11:09:41.542912 6400 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:09:41.542952 6400 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:09:41.542994 6400 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 11:09:41.543016 6400 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 11:09:41.543049 6400 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 11:09:41.543050 6400 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:09:41.543088 6400 factory.go:656] Stopping watch factory\\\\nI1124 11:09:41.543107 6400 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:09:41.543015 6400 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:09:41.543088 6400 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 11:09:41.542995 6400 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 
11:09:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17a209788447e8d556a2f5d4611b2979e998e017c2ad7a81d88b9d005f215721\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:09:43Z\\\",\\\"message\\\":\\\"amespaces:*false,},},},Features:nil,},}\\\\nI1124 11:09:43.362510 6535 egressqos.go:1009] Finished syncing EgressQoS node crc : 15.350947ms\\\\nI1124 11:09:43.362562 6535 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI1124 11:09:43.362359 6535 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:09:43.362598 6535 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 11:09:43.362644 6535 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:09:43.362667 6535 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:09:43.362683 6535 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 11:09:43.362716 6535 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:09:43.362743 6535 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:09:43.362757 6535 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:09:43.362762 6535 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:09:43.362775 6535 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:09:43.362781 6535 factory.go:656] Stopping watch factory\\\\nI1124 11:09:43.362790 6535 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:09:43.362800 6535 ovnkube.go:599] Stopped ovnkube\\\\nI1124 
11:09:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.612438 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9c05ddf6-986e-4bd6-95f0-7d734bc59140-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wndk6\" (UID: \"9c05ddf6-986e-4bd6-95f0-7d734bc59140\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.612506 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gztmk\" (UniqueName: \"kubernetes.io/projected/9c05ddf6-986e-4bd6-95f0-7d734bc59140-kube-api-access-gztmk\") pod \"ovnkube-control-plane-749d76644c-wndk6\" (UID: \"9c05ddf6-986e-4bd6-95f0-7d734bc59140\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.612575 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9c05ddf6-986e-4bd6-95f0-7d734bc59140-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wndk6\" (UID: \"9c05ddf6-986e-4bd6-95f0-7d734bc59140\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.612650 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9c05ddf6-986e-4bd6-95f0-7d734bc59140-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wndk6\" (UID: \"9c05ddf6-986e-4bd6-95f0-7d734bc59140\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.613516 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9c05ddf6-986e-4bd6-95f0-7d734bc59140-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wndk6\" (UID: \"9c05ddf6-986e-4bd6-95f0-7d734bc59140\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.613828 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9c05ddf6-986e-4bd6-95f0-7d734bc59140-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wndk6\" (UID: \"9c05ddf6-986e-4bd6-95f0-7d734bc59140\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.626262 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9c05ddf6-986e-4bd6-95f0-7d734bc59140-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wndk6\" (UID: \"9c05ddf6-986e-4bd6-95f0-7d734bc59140\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.632989 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.646407 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gztmk\" (UniqueName: \"kubernetes.io/projected/9c05ddf6-986e-4bd6-95f0-7d734bc59140-kube-api-access-gztmk\") pod \"ovnkube-control-plane-749d76644c-wndk6\" (UID: \"9c05ddf6-986e-4bd6-95f0-7d734bc59140\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.664997 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a209788447e8d556a2f5d4611b2979e998e017
c2ad7a81d88b9d005f215721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07e6e7ab2f5cf671ed26130bd75177f315add4c324c1f8ca873c79b389c6d8d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:09:41Z\\\",\\\"message\\\":\\\"\\\\nI1124 11:09:41.542791 6400 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:09:41.542822 6400 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:09:41.542853 6400 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:09:41.542876 6400 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 11:09:41.542898 6400 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 11:09:41.542912 6400 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:09:41.542952 6400 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:09:41.542994 6400 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 11:09:41.543016 6400 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 11:09:41.543049 6400 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 11:09:41.543050 6400 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:09:41.543088 6400 factory.go:656] Stopping watch factory\\\\nI1124 11:09:41.543107 6400 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:09:41.543015 6400 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:09:41.543088 6400 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 11:09:41.542995 6400 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:09:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17a209788447e8d556a2f5d4611b2979e998e017c2ad7a81d88b9d005f215721\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:09:43Z\\\",\\\"message\\\":\\\"amespaces:*false,},},},Features:nil,},}\\\\nI1124 11:09:43.362510 6535 egressqos.go:1009] Finished syncing EgressQoS node crc : 15.350947ms\\\\nI1124 11:09:43.362562 6535 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI1124 11:09:43.362359 6535 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:09:43.362598 6535 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 11:09:43.362644 6535 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:09:43.362667 6535 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:09:43.362683 6535 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 11:09:43.362716 6535 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:09:43.362743 6535 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:09:43.362757 6535 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:09:43.362762 6535 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:09:43.362775 6535 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:09:43.362781 6535 factory.go:656] Stopping watch 
factory\\\\nI1124 11:09:43.362790 6535 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:09:43.362800 6535 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:09:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\
"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.687716 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.707638 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.707691 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.707709 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.707733 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.707751 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:44Z","lastTransitionTime":"2025-11-24T11:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.712075 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69b8017daa872327d88eab8150845309e30c5cf37b229292e7c8a80e5d599c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.731605 5072 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-t8b9x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.748724 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.765608 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.772422 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.785775 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c05ddf6-986e-4bd6-95f0-7d734bc59140\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wndk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: W1124 11:09:44.794707 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c05ddf6_986e_4bd6_95f0_7d734bc59140.slice/crio-c0cbd498146d48346cbf4da9de4b639c3467ca7bdf3a1676d43954115093eccd WatchSource:0}: Error finding container c0cbd498146d48346cbf4da9de4b639c3467ca7bdf3a1676d43954115093eccd: Status 404 returned error can't find the container with id c0cbd498146d48346cbf4da9de4b639c3467ca7bdf3a1676d43954115093eccd Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.807084 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" 
for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.810730 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.810894 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.810981 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.811078 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.811194 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:44Z","lastTransitionTime":"2025-11-24T11:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.824927 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.841713 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.857218 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.873903 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.891999 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\
\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.910735 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.913141 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.913173 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.913184 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.913200 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:44 crc kubenswrapper[5072]: I1124 11:09:44.913210 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:44Z","lastTransitionTime":"2025-11-24T11:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.015161 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.015195 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.015204 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.015216 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.015227 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:45Z","lastTransitionTime":"2025-11-24T11:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.117269 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.117291 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.117300 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.117313 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.117321 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:45Z","lastTransitionTime":"2025-11-24T11:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.219841 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.219874 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.219882 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.219897 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.219906 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:45Z","lastTransitionTime":"2025-11-24T11:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.313157 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" event={"ID":"9c05ddf6-986e-4bd6-95f0-7d734bc59140","Type":"ContainerStarted","Data":"ea4b260f16a11dade8c8b120408cf2d167dd868a9b938f4231aa811546252c56"} Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.313212 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" event={"ID":"9c05ddf6-986e-4bd6-95f0-7d734bc59140","Type":"ContainerStarted","Data":"894e58e94d99e8ef26722db709e0135d59ac4847daf001e37ce266c9baf02e48"} Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.313227 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" event={"ID":"9c05ddf6-986e-4bd6-95f0-7d734bc59140","Type":"ContainerStarted","Data":"c0cbd498146d48346cbf4da9de4b639c3467ca7bdf3a1676d43954115093eccd"} Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.316166 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4qmw_80fda759-ddfd-438a-b5a2-cb775ee1bf7e/ovnkube-controller/1.log" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.320248 5072 scope.go:117] "RemoveContainer" containerID="17a209788447e8d556a2f5d4611b2979e998e017c2ad7a81d88b9d005f215721" Nov 24 11:09:45 crc kubenswrapper[5072]: E1124 11:09:45.320386 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-n4qmw_openshift-ovn-kubernetes(80fda759-ddfd-438a-b5a2-cb775ee1bf7e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.328286 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.328531 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.328660 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.328826 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.328951 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:45Z","lastTransitionTime":"2025-11-24T11:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.340815 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69b8017daa872327d88eab8150845309e30c5cf37b229292e7c8a80e5d599c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.351902 5072 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.365982 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c05ddf6-986e-4bd6-95f0-7d734bc59140\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://894e58e94d99e8ef26722db709e0135d59ac4847daf001e37ce266c9baf02e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4b260f16a11dade8c8b120408cf2d167dd868a9b938f4231aa811546252c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wndk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 
11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.377987 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.392296 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.406562 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.426077 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.431495 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.431559 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.431572 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.431592 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.431604 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:45Z","lastTransitionTime":"2025-11-24T11:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.440407 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.458178 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.474169 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.492446 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.513005 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.534195 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.534260 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.534273 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.534290 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.534304 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:45Z","lastTransitionTime":"2025-11-24T11:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.537508 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.553015 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.577712 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-nnrv7"] Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.577616 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a209788447e8d556a2f5d4611b2979e998e017
c2ad7a81d88b9d005f215721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07e6e7ab2f5cf671ed26130bd75177f315add4c324c1f8ca873c79b389c6d8d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:09:41Z\\\",\\\"message\\\":\\\"\\\\nI1124 11:09:41.542791 6400 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:09:41.542822 6400 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:09:41.542853 6400 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:09:41.542876 6400 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 11:09:41.542898 6400 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 11:09:41.542912 6400 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:09:41.542952 6400 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:09:41.542994 6400 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 11:09:41.543016 6400 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 11:09:41.543049 6400 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 11:09:41.543050 6400 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:09:41.543088 6400 factory.go:656] Stopping watch factory\\\\nI1124 11:09:41.543107 6400 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:09:41.543015 6400 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:09:41.543088 6400 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 11:09:41.542995 6400 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:09:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17a209788447e8d556a2f5d4611b2979e998e017c2ad7a81d88b9d005f215721\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:09:43Z\\\",\\\"message\\\":\\\"amespaces:*false,},},},Features:nil,},}\\\\nI1124 11:09:43.362510 6535 egressqos.go:1009] Finished syncing EgressQoS node crc : 15.350947ms\\\\nI1124 11:09:43.362562 6535 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI1124 11:09:43.362359 6535 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:09:43.362598 6535 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 11:09:43.362644 6535 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:09:43.362667 6535 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:09:43.362683 6535 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 11:09:43.362716 6535 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:09:43.362743 6535 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:09:43.362757 6535 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:09:43.362762 6535 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:09:43.362775 6535 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:09:43.362781 6535 factory.go:656] Stopping watch 
factory\\\\nI1124 11:09:43.362790 6535 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:09:43.362800 6535 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:09:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\
"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.578185 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:09:45 crc kubenswrapper[5072]: E1124 11:09:45.578253 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.594214 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.611624 5072 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.621612 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60100e7d-c8b1-4b18-8567-46e21096fa0f-metrics-certs\") pod \"network-metrics-daemon-nnrv7\" (UID: \"60100e7d-c8b1-4b18-8567-46e21096fa0f\") " pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.621859 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbdfs\" (UniqueName: \"kubernetes.io/projected/60100e7d-c8b1-4b18-8567-46e21096fa0f-kube-api-access-rbdfs\") pod \"network-metrics-daemon-nnrv7\" (UID: \"60100e7d-c8b1-4b18-8567-46e21096fa0f\") " pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.628418 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.637748 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.637783 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.637810 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.637824 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.637833 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:45Z","lastTransitionTime":"2025-11-24T11:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.643872 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.660259 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.673543 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.685859 5072 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c05ddf6-986e-4bd6-95f0-7d734bc59140\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://894e58e94d99e8ef26722db709e0135d59ac4847daf001e37ce266c9baf02e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4b260f16a11dade8c8b120408cf2d167dd868a9b938f4231aa811546252c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wndk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.699442 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.710645 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.722635 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.722694 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbdfs\" (UniqueName: \"kubernetes.io/projected/60100e7d-c8b1-4b18-8567-46e21096fa0f-kube-api-access-rbdfs\") pod \"network-metrics-daemon-nnrv7\" (UID: \"60100e7d-c8b1-4b18-8567-46e21096fa0f\") " pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.722739 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60100e7d-c8b1-4b18-8567-46e21096fa0f-metrics-certs\") pod \"network-metrics-daemon-nnrv7\" (UID: \"60100e7d-c8b1-4b18-8567-46e21096fa0f\") " pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:09:45 crc kubenswrapper[5072]: E1124 11:09:45.722857 5072 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:09:45 crc kubenswrapper[5072]: E1124 11:09:45.722906 5072 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:09:45 crc kubenswrapper[5072]: E1124 11:09:45.722921 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60100e7d-c8b1-4b18-8567-46e21096fa0f-metrics-certs podName:60100e7d-c8b1-4b18-8567-46e21096fa0f nodeName:}" failed. No retries permitted until 2025-11-24 11:09:46.222904397 +0000 UTC m=+37.934428893 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60100e7d-c8b1-4b18-8567-46e21096fa0f-metrics-certs") pod "network-metrics-daemon-nnrv7" (UID: "60100e7d-c8b1-4b18-8567-46e21096fa0f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:09:45 crc kubenswrapper[5072]: E1124 11:09:45.723025 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:10:01.723002029 +0000 UTC m=+53.434526605 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.727142 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.740639 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
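Note: every "Failed to update status for pod" entry in this stretch fails for the same underlying reason: the kubelet's status patch is intercepted by the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, and that endpoint is serving a certificate whose notAfter is 2025-08-24T17:21:41Z while the node clock reads 2025-11-24T11:09:45Z, so the TLS handshake is rejected before any patch is applied. A minimal Go sketch of the expiry half of the validity-window check that yields this exact error text (illustrative only, with the timestamps taken from the log; not the actual crypto/x509 implementation):

    package main

    import (
        "fmt"
        "time"
    )

    // checkNotAfter mirrors the expiry check whose failure message recurs
    // throughout this log: "x509: certificate has expired or is not yet valid".
    // Illustrative sketch; the real check lives in Go's crypto/x509 verifier.
    func checkNotAfter(notAfter, now time.Time) error {
        if now.After(notAfter) {
            return fmt.Errorf(
                "x509: certificate has expired or is not yet valid: current time %s is after %s",
                now.UTC().Format(time.RFC3339), notAfter.UTC().Format(time.RFC3339))
        }
        return nil
    }

    func main() {
        notAfter := time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC) // notAfter from the log
        now := time.Date(2025, 11, 24, 11, 9, 45, 0, time.UTC)      // node clock in the log
        if err := checkNotAfter(notAfter, now); err != nil {
            fmt.Println(err) // same message as the webhook failures above
        }
    }

Once the cluster regenerates the expired internal serving certificate (which OpenShift typically does on its own some time after startup), these patch failures clear without any kubelet-side change.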
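Note: the interleaved volume errors also show the kubelet's per-operation retry control. After each failed MountVolume.SetUp, the nestedpendingoperations layer schedules the next attempt with a growing durationBeforeRetry: 500ms above, 16s later in the log. That progression is consistent with an exponential backoff that doubles per failure up to a cap; the base, factor, and cap below are assumptions read off the observed values, not taken from the kubelet source:

    package main

    import (
        "fmt"
        "time"
    )

    // durationBeforeRetry reproduces the retry delays observed in the log
    // (500ms on the first failure, 16s once the backoff saturates). The
    // base/factor/cap are inferred from the log, not from kubelet code.
    func durationBeforeRetry(failures int) time.Duration {
        const (
            base     = 500 * time.Millisecond
            maxDelay = 16 * time.Second
        )
        d := base
        for i := 1; i < failures; i++ {
            d *= 2
            if d >= maxDelay {
                return maxDelay
            }
        }
        return d
    }

    func main() {
        for n := 1; n <= 7; n++ {
            fmt.Printf("failure %d -> retry in %v\n", n, durationBeforeRetry(n))
        }
        // failure 1 -> retry in 500ms ... failure 6 and later -> retry in 16s
    }

The underlying mount failures themselves ("object ... not registered") persist until the kubelet's informer caches learn about the referenced secrets and configmaps; at this point in the log that has not yet happened.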
Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.740696 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.740711 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.740730 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.740744 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:45Z","lastTransitionTime":"2025-11-24T11:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.741593 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.743213 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbdfs\" (UniqueName: \"kubernetes.io/projected/60100e7d-c8b1-4b18-8567-46e21096fa0f-kube-api-access-rbdfs\") pod \"network-metrics-daemon-nnrv7\" (UID: \"60100e7d-c8b1-4b18-8567-46e21096fa0f\") " pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.759858 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\
\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.775142 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.793328 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a209788447e8d556a2f5d4611b2979e998e017
c2ad7a81d88b9d005f215721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17a209788447e8d556a2f5d4611b2979e998e017c2ad7a81d88b9d005f215721\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:09:43Z\\\",\\\"message\\\":\\\"amespaces:*false,},},},Features:nil,},}\\\\nI1124 11:09:43.362510 6535 egressqos.go:1009] Finished syncing EgressQoS node crc : 15.350947ms\\\\nI1124 11:09:43.362562 6535 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI1124 11:09:43.362359 6535 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:09:43.362598 6535 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 11:09:43.362644 6535 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:09:43.362667 6535 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:09:43.362683 6535 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 11:09:43.362716 6535 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:09:43.362743 6535 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:09:43.362757 6535 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:09:43.362762 6535 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:09:43.362775 6535 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:09:43.362781 6535 factory.go:656] Stopping watch factory\\\\nI1124 11:09:43.362790 6535 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:09:43.362800 6535 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:09:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4qmw_openshift-ovn-kubernetes(80fda759-ddfd-438a-b5a2-cb775ee1bf7e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.809491 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69b8017daa872327d88eab8150845309e30c5cf37b229292e7c8a80e5d599c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.823707 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:09:45 crc kubenswrapper[5072]: E1124 11:09:45.823886 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:10:01.82386323 +0000 UTC m=+53.535387706 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.824413 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.824546 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:09:45 crc kubenswrapper[5072]: E1124 11:09:45.824627 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:09:45 crc kubenswrapper[5072]: E1124 11:09:45.824664 5072 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:09:45 crc kubenswrapper[5072]: E1124 11:09:45.824673 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:09:45 crc kubenswrapper[5072]: E1124 11:09:45.824688 5072 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:09:45 crc kubenswrapper[5072]: E1124 11:09:45.824706 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:10:01.82469443 +0000 UTC m=+53.536218896 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:09:45 crc kubenswrapper[5072]: E1124 11:09:45.824768 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:10:01.824727611 +0000 UTC m=+53.536252157 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:09:45 crc kubenswrapper[5072]: E1124 11:09:45.824860 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:09:45 crc kubenswrapper[5072]: E1124 11:09:45.824893 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:09:45 crc kubenswrapper[5072]: E1124 11:09:45.824905 5072 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:09:45 crc kubenswrapper[5072]: E1124 11:09:45.824949 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:10:01.824935286 +0000 UTC m=+53.536459762 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.824653 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.827129 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.839439 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.845904 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.845946 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.845955 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.845970 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.845982 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:45Z","lastTransitionTime":"2025-11-24T11:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.860270 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a209788447e8d556a2f5d4611b2979e998e017c2ad7a81d88b9d005f215721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17a209788447e8d556a2f5d4611b2979e998e017c2ad7a81d88b9d005f215721\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:09:43Z\\\",\\\"message\\\":\\\"amespaces:*false,},},},Features:nil,},}\\\\nI1124 11:09:43.362510 6535 egressqos.go:1009] Finished syncing EgressQoS node crc : 15.350947ms\\\\nI1124 11:09:43.362562 6535 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI1124 11:09:43.362359 6535 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:09:43.362598 6535 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 11:09:43.362644 6535 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:09:43.362667 6535 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:09:43.362683 6535 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 11:09:43.362716 6535 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:09:43.362743 6535 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:09:43.362757 6535 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:09:43.362762 6535 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:09:43.362775 6535 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:09:43.362781 6535 factory.go:656] Stopping watch factory\\\\nI1124 11:09:43.362790 6535 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:09:43.362800 6535 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:09:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4qmw_openshift-ovn-kubernetes(80fda759-ddfd-438a-b5a2-cb775ee1bf7e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.874225 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69b8017daa872327d88eab8150845309e30c5cf37b229292e7c8a80e5d599c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.885124 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c05ddf6-986e-4bd6-95f0-7d734bc59140\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://894e58e94d99e8ef26722db709e0135d59ac4847daf001e37ce266c9baf02e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4b260f16a11dade8c8b120408cf2d167dd868a9b938f4231aa811546252c56\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wndk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.893970 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nnrv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60100e7d-c8b1-4b18-8567-46e21096fa0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nnrv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.906243 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.917174 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.927126 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.940012 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.949578 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.949623 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.949637 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.949657 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.949675 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:45Z","lastTransitionTime":"2025-11-24T11:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.954988 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.967633 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:45 crc kubenswrapper[5072]: I1124 11:09:45.984259 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba
918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.000041 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.014142 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.016206 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.016330 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:09:46 crc kubenswrapper[5072]: E1124 11:09:46.016577 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.016643 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:09:46 crc kubenswrapper[5072]: E1124 11:09:46.016762 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:09:46 crc kubenswrapper[5072]: E1124 11:09:46.016900 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.028352 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.052630 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.052684 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.052699 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.052718 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.052732 5072 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:46Z","lastTransitionTime":"2025-11-24T11:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.155472 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.155542 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.155566 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.155594 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.155613 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:46Z","lastTransitionTime":"2025-11-24T11:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.230635 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60100e7d-c8b1-4b18-8567-46e21096fa0f-metrics-certs\") pod \"network-metrics-daemon-nnrv7\" (UID: \"60100e7d-c8b1-4b18-8567-46e21096fa0f\") " pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:09:46 crc kubenswrapper[5072]: E1124 11:09:46.230879 5072 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:09:46 crc kubenswrapper[5072]: E1124 11:09:46.231170 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60100e7d-c8b1-4b18-8567-46e21096fa0f-metrics-certs podName:60100e7d-c8b1-4b18-8567-46e21096fa0f nodeName:}" failed. No retries permitted until 2025-11-24 11:09:47.231133728 +0000 UTC m=+38.942658244 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60100e7d-c8b1-4b18-8567-46e21096fa0f-metrics-certs") pod "network-metrics-daemon-nnrv7" (UID: "60100e7d-c8b1-4b18-8567-46e21096fa0f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.258185 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.258219 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.258228 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.258243 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.258257 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:46Z","lastTransitionTime":"2025-11-24T11:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.361499 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.361550 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.361567 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.361591 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.361609 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:46Z","lastTransitionTime":"2025-11-24T11:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.465110 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.465427 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.465522 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.465628 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.465736 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:46Z","lastTransitionTime":"2025-11-24T11:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.614126 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.614161 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.614172 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.614187 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.614199 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:46Z","lastTransitionTime":"2025-11-24T11:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.717736 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.718108 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.718233 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.718418 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.718544 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:46Z","lastTransitionTime":"2025-11-24T11:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.821235 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.821305 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.821329 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.821360 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.821424 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:46Z","lastTransitionTime":"2025-11-24T11:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.925111 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.925165 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.925182 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.925205 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:46 crc kubenswrapper[5072]: I1124 11:09:46.925225 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:46Z","lastTransitionTime":"2025-11-24T11:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.016449 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:09:47 crc kubenswrapper[5072]: E1124 11:09:47.016676 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.027236 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.027304 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.027326 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.027355 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.027414 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:47Z","lastTransitionTime":"2025-11-24T11:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.130008 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.130453 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.130627 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.130807 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.130988 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:47Z","lastTransitionTime":"2025-11-24T11:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.233967 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.234036 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.234059 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.234095 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.234118 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:47Z","lastTransitionTime":"2025-11-24T11:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.242645 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60100e7d-c8b1-4b18-8567-46e21096fa0f-metrics-certs\") pod \"network-metrics-daemon-nnrv7\" (UID: \"60100e7d-c8b1-4b18-8567-46e21096fa0f\") " pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:09:47 crc kubenswrapper[5072]: E1124 11:09:47.242866 5072 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:09:47 crc kubenswrapper[5072]: E1124 11:09:47.242960 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60100e7d-c8b1-4b18-8567-46e21096fa0f-metrics-certs podName:60100e7d-c8b1-4b18-8567-46e21096fa0f nodeName:}" failed. No retries permitted until 2025-11-24 11:09:49.242934088 +0000 UTC m=+40.954458594 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60100e7d-c8b1-4b18-8567-46e21096fa0f-metrics-certs") pod "network-metrics-daemon-nnrv7" (UID: "60100e7d-c8b1-4b18-8567-46e21096fa0f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.339688 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.339754 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.339781 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.339822 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.339845 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:47Z","lastTransitionTime":"2025-11-24T11:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.442427 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.442497 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.442519 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.442586 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.442609 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:47Z","lastTransitionTime":"2025-11-24T11:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.545475 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.545784 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.546662 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.546707 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.546725 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:47Z","lastTransitionTime":"2025-11-24T11:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.649398 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.649452 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.649471 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.649514 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.649529 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:47Z","lastTransitionTime":"2025-11-24T11:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.752926 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.752991 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.753013 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.753045 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.753067 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:47Z","lastTransitionTime":"2025-11-24T11:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.856645 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.856710 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.856733 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.856763 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.856784 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:47Z","lastTransitionTime":"2025-11-24T11:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.960244 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.960291 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.960304 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.960321 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:47 crc kubenswrapper[5072]: I1124 11:09:47.960337 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:47Z","lastTransitionTime":"2025-11-24T11:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.016127 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.016204 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.016235 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:09:48 crc kubenswrapper[5072]: E1124 11:09:48.016305 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:09:48 crc kubenswrapper[5072]: E1124 11:09:48.016485 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:09:48 crc kubenswrapper[5072]: E1124 11:09:48.016663 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.063426 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.063483 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.063500 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.063523 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.063541 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:48Z","lastTransitionTime":"2025-11-24T11:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.165991 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.166065 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.166082 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.166106 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.166127 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:48Z","lastTransitionTime":"2025-11-24T11:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.268504 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.268575 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.268599 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.268627 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.268648 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:48Z","lastTransitionTime":"2025-11-24T11:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.371985 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.372035 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.372046 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.372063 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.372078 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:48Z","lastTransitionTime":"2025-11-24T11:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.475055 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.475114 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.475130 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.475153 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.475170 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:48Z","lastTransitionTime":"2025-11-24T11:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.577751 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.577810 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.577828 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.577852 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.577870 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:48Z","lastTransitionTime":"2025-11-24T11:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.681340 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.681435 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.681455 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.681477 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.681494 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:48Z","lastTransitionTime":"2025-11-24T11:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.783852 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.783909 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.783927 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.783950 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.783967 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:48Z","lastTransitionTime":"2025-11-24T11:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.887195 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.887251 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.887267 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.887291 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.887308 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:48Z","lastTransitionTime":"2025-11-24T11:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.989871 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.989929 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.989945 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.989972 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:48 crc kubenswrapper[5072]: I1124 11:09:48.989990 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:48Z","lastTransitionTime":"2025-11-24T11:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.015406 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:09:49 crc kubenswrapper[5072]: E1124 11:09:49.015754 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.037080 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.057056 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.078484 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.092331 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.092396 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.092413 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.092435 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.092452 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:49Z","lastTransitionTime":"2025-11-24T11:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.098934 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.122727 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.141991 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.172185 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a209788447e8d556a2f5d4611b2979e998e017
c2ad7a81d88b9d005f215721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17a209788447e8d556a2f5d4611b2979e998e017c2ad7a81d88b9d005f215721\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:09:43Z\\\",\\\"message\\\":\\\"amespaces:*false,},},},Features:nil,},}\\\\nI1124 11:09:43.362510 6535 egressqos.go:1009] Finished syncing EgressQoS node crc : 15.350947ms\\\\nI1124 11:09:43.362562 6535 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI1124 11:09:43.362359 6535 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:09:43.362598 6535 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 11:09:43.362644 6535 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:09:43.362667 6535 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:09:43.362683 6535 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 11:09:43.362716 6535 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:09:43.362743 6535 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:09:43.362757 6535 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:09:43.362762 6535 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:09:43.362775 6535 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:09:43.362781 6535 factory.go:656] Stopping watch factory\\\\nI1124 11:09:43.362790 6535 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:09:43.362800 6535 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:09:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4qmw_openshift-ovn-kubernetes(80fda759-ddfd-438a-b5a2-cb775ee1bf7e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.194717 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.194818 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69b8017daa872327d88eab8150845309e30c5cf37b229292e7c8a80e5d599c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.194998 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.195156 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:49 crc kubenswrapper[5072]: 
I1124 11:09:49.195190 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.195209 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:49Z","lastTransitionTime":"2025-11-24T11:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.210205 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.231575 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.249903 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.264068 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60100e7d-c8b1-4b18-8567-46e21096fa0f-metrics-certs\") pod \"network-metrics-daemon-nnrv7\" (UID: \"60100e7d-c8b1-4b18-8567-46e21096fa0f\") " pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:09:49 crc kubenswrapper[5072]: E1124 11:09:49.264290 5072 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:09:49 crc kubenswrapper[5072]: E1124 11:09:49.264481 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60100e7d-c8b1-4b18-8567-46e21096fa0f-metrics-certs podName:60100e7d-c8b1-4b18-8567-46e21096fa0f nodeName:}" failed. No retries permitted until 2025-11-24 11:09:53.264440149 +0000 UTC m=+44.975964665 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60100e7d-c8b1-4b18-8567-46e21096fa0f-metrics-certs") pod "network-metrics-daemon-nnrv7" (UID: "60100e7d-c8b1-4b18-8567-46e21096fa0f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.265752 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.281499 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c05ddf6-986e-4bd6-95f0-7d734bc59140\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://894e58e94d99e8ef26722db709e0135d59ac4847daf001e37ce266c9baf02e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4b260f16a11dade8c8b120408cf2d167dd868a9b938f4231aa811546252c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wndk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:49Z is after 2025-08-24T17:21:41Z" Nov 24 
11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.298089 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nnrv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60100e7d-c8b1-4b18-8567-46e21096fa0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nnrv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.299283 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.299345 5072 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.299363 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.299414 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.299437 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:49Z","lastTransitionTime":"2025-11-24T11:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.316821 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\
\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.332782 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.402518 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.402574 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.402591 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.402614 5072 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.402630 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:49Z","lastTransitionTime":"2025-11-24T11:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.505065 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.505131 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.505150 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.505173 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.505191 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:49Z","lastTransitionTime":"2025-11-24T11:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.557880 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.557954 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.557979 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.558007 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.558030 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:49Z","lastTransitionTime":"2025-11-24T11:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:49 crc kubenswrapper[5072]: E1124 11:09:49.576720 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.581356 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.581553 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.581628 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.581667 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.581729 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:49Z","lastTransitionTime":"2025-11-24T11:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:49 crc kubenswrapper[5072]: E1124 11:09:49.600885 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.605841 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.605941 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.605970 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.605999 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.606035 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:49Z","lastTransitionTime":"2025-11-24T11:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:49 crc kubenswrapper[5072]: E1124 11:09:49.627006 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.635513 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.635563 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.635575 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.635592 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.635606 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:49Z","lastTransitionTime":"2025-11-24T11:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:49 crc kubenswrapper[5072]: E1124 11:09:49.651419 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.655732 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.655808 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.655818 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.655832 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.655841 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:49Z","lastTransitionTime":"2025-11-24T11:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:49 crc kubenswrapper[5072]: E1124 11:09:49.675093 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:49 crc kubenswrapper[5072]: E1124 11:09:49.675319 5072 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.677311 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
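Every one of the failed status patches above dies on the same root cause: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate whose validity window ended 2025-08-24T17:21:41Z, while the node clock reads 2025-11-24. Below is a minimal Go sketch of the validity-window check that crypto/x509 applies during the TLS handshake; the certificate path is hypothetical and would need to point at the webhook's actual serving certificate.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path; point this at the webhook's serving certificate.
	data, err := os.ReadFile("/tmp/webhook-serving-cert.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		// The branch the handshake in the log keeps hitting:
		// the current time is after NotAfter.
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate not yet valid: current time %s is before %s\n",
			now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	default:
		fmt.Printf("certificate valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	}
}

Once this window check fails, every retry is guaranteed to fail identically, which is why the kubelet exhausts its retry budget above rather than recovering.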
event="NodeHasSufficientMemory" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.677402 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.677426 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.677457 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.677476 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:49Z","lastTransitionTime":"2025-11-24T11:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.780437 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.780487 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.780503 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.780525 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.780544 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:49Z","lastTransitionTime":"2025-11-24T11:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.883834 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.883884 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.883897 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.883918 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.883932 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:49Z","lastTransitionTime":"2025-11-24T11:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.986649 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.986745 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.986771 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.986801 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:49 crc kubenswrapper[5072]: I1124 11:09:49.986822 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:49Z","lastTransitionTime":"2025-11-24T11:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.016193 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.016193 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:09:50 crc kubenswrapper[5072]: E1124 11:09:50.016432 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:09:50 crc kubenswrapper[5072]: E1124 11:09:50.016556 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.016231 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:09:50 crc kubenswrapper[5072]: E1124 11:09:50.016671 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
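The NodeNotReady condition and the pod sync failures above share one root message: no CNI configuration file in /etc/kubernetes/cni/net.d/. A minimal Go sketch of that readiness check follows, assuming the runtime simply looks for at least one config file in the conf dir; the .conf/.conflist/.json extension list mirrors what libcni loads and is an assumption here, not the runtime's exact logic.

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // the directory named in the log
	var found []string
	// Glob for the extensions libcni accepts; Glob only errors on a
	// malformed pattern, so the error is safe to ignore for these literals.
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, _ := filepath.Glob(filepath.Join(confDir, pat))
		found = append(found, matches...)
	}
	if len(found) == 0 {
		// The state the kubelet keeps reporting: NetworkReady=false until
		// the network provider writes its configuration into the conf dir.
		fmt.Println("no CNI configuration file found; network plugin not ready")
		return
	}
	fmt.Println("CNI config present:", found)
}

Pods with host networking can still start in this state; the repeated "Error syncing pod, skipping" entries above are all for pods that need the cluster network, so they stay queued until a config file appears.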
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.090187 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.090256 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.090291 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.090308 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.090320 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:50Z","lastTransitionTime":"2025-11-24T11:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.193667 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.193728 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.193745 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.193768 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.193786 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:50Z","lastTransitionTime":"2025-11-24T11:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.296738 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.296797 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.296813 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.296837 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.296854 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:50Z","lastTransitionTime":"2025-11-24T11:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.399895 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.399961 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.399985 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.400011 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.400033 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:50Z","lastTransitionTime":"2025-11-24T11:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.501802 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.501830 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.501838 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.501851 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.501859 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:50Z","lastTransitionTime":"2025-11-24T11:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
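Each setters.go:603 entry logs the Ready condition the kubelet is about to report; the shape is exactly the fields visible in the condition={...} object. A minimal Go sketch that reproduces that object, using a local struct that mirrors only the logged fields rather than the full k8s.io/api NodeCondition type:

package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// nodeCondition mirrors only the fields visible in the logged condition
// object; it is not the real k8s.io/api NodeCondition type.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	now := time.Now().UTC().Format(time.RFC3339)
	c := nodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  now,
		LastTransitionTime: now,
		Reason:             "KubeletNotReady",
		Message:            "container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady",
	}
	out, err := json.Marshal(c)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // same shape as the condition={...} objects above
}

The $setElementOrder/conditions directive in the failed patches earlier is the strategic-merge-patch envelope that carries four such condition objects (MemoryPressure, DiskPressure, PIDPressure, Ready) in one update.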
Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.604853 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.604920 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.604947 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.604961 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.604969 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:50Z","lastTransitionTime":"2025-11-24T11:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.707672 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.707746 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.707768 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.707798 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.707821 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:50Z","lastTransitionTime":"2025-11-24T11:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.811016 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.811076 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.811093 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.811116 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.811137 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:50Z","lastTransitionTime":"2025-11-24T11:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.914162 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.914236 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.914253 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.914276 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:09:50 crc kubenswrapper[5072]: I1124 11:09:50.914293 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:50Z","lastTransitionTime":"2025-11-24T11:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.015713 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7"
Nov 24 11:09:51 crc kubenswrapper[5072]: E1124 11:09:51.015883 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f"
Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.017361 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.017469 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.017492 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.017539 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.017565 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:51Z","lastTransitionTime":"2025-11-24T11:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
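The burst of "Error updating node status, will retry" entries followed by "update node status exceeds retry count" earlier in the log reflects a bounded retry around the status patch. A minimal Go sketch of that pattern follows; the attempt count of 5 matches the kubelet's nodeStatusUpdateRetry constant but should be treated as an assumption here, and patchNodeStatus is a hypothetical stand-in for the real PATCH against the API server.

package main

import (
	"errors"
	"fmt"
)

// Assumed to match the kubelet's nodeStatusUpdateRetry constant; the log
// itself only shows that a fixed budget exists and gets exhausted.
const nodeStatusUpdateRetry = 5

// patchNodeStatus is a hypothetical stand-in for the real API call; here
// it always fails, the way the expired webhook certificate made every
// attempt in the log fail.
func patchNodeStatus() error {
	return errors.New("failed calling webhook: certificate has expired")
}

func main() {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patchNodeStatus(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return
	}
	fmt.Println("Unable to update node status: update node status exceeds retry count")
}

Because the failure is deterministic (an expired certificate, not a transient network blip), the budget is exhausted on every heartbeat interval, which is why the same sequence keeps repeating below.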
Has your network provider started?"} Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.120184 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.120254 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.120267 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.120286 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.120298 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:51Z","lastTransitionTime":"2025-11-24T11:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.223504 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.223571 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.223596 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.223629 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.223653 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:51Z","lastTransitionTime":"2025-11-24T11:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.326920 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.327033 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.327050 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.327072 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.327088 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:51Z","lastTransitionTime":"2025-11-24T11:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.430242 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.430302 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.430324 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.430351 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.430405 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:51Z","lastTransitionTime":"2025-11-24T11:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.532627 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.532675 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.532694 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.532718 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.532735 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:51Z","lastTransitionTime":"2025-11-24T11:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.635916 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.635983 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.636001 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.636027 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.636046 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:51Z","lastTransitionTime":"2025-11-24T11:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.739367 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.739470 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.739487 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.739509 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.739526 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:51Z","lastTransitionTime":"2025-11-24T11:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.842750 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.842805 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.842823 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.842850 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.842878 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:51Z","lastTransitionTime":"2025-11-24T11:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.945524 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.945566 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.945614 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.945637 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:51 crc kubenswrapper[5072]: I1124 11:09:51.945653 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:51Z","lastTransitionTime":"2025-11-24T11:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.016295 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.016404 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:09:52 crc kubenswrapper[5072]: E1124 11:09:52.016475 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.016304 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:09:52 crc kubenswrapper[5072]: E1124 11:09:52.016554 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:09:52 crc kubenswrapper[5072]: E1124 11:09:52.016703 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.048077 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.048106 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.048114 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.048126 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.048135 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:52Z","lastTransitionTime":"2025-11-24T11:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.151271 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.151325 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.151344 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.151398 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.151418 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:52Z","lastTransitionTime":"2025-11-24T11:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.254645 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.254687 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.254700 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.254717 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.254729 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:52Z","lastTransitionTime":"2025-11-24T11:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.357741 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.357779 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.357787 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.357818 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.357827 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:52Z","lastTransitionTime":"2025-11-24T11:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.460835 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.460892 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.460910 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.460934 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.460951 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:52Z","lastTransitionTime":"2025-11-24T11:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.564053 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.564120 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.564141 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.564170 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.564190 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:52Z","lastTransitionTime":"2025-11-24T11:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.667933 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.667988 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.668004 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.668027 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.668044 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:52Z","lastTransitionTime":"2025-11-24T11:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.770473 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.770533 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.770548 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.770571 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.770598 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:52Z","lastTransitionTime":"2025-11-24T11:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.873433 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.873508 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.873531 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.873559 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.873580 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:52Z","lastTransitionTime":"2025-11-24T11:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.977172 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.977272 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.977324 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.977353 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:52 crc kubenswrapper[5072]: I1124 11:09:52.977403 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:52Z","lastTransitionTime":"2025-11-24T11:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.016068 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:09:53 crc kubenswrapper[5072]: E1124 11:09:53.016209 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.080450 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.080510 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.080529 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.080558 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.080579 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:53Z","lastTransitionTime":"2025-11-24T11:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.183626 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.183695 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.183720 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.183746 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.183767 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:53Z","lastTransitionTime":"2025-11-24T11:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.286620 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.286659 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.286671 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.286687 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.286698 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:53Z","lastTransitionTime":"2025-11-24T11:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.309848 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60100e7d-c8b1-4b18-8567-46e21096fa0f-metrics-certs\") pod \"network-metrics-daemon-nnrv7\" (UID: \"60100e7d-c8b1-4b18-8567-46e21096fa0f\") " pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:09:53 crc kubenswrapper[5072]: E1124 11:09:53.310078 5072 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:09:53 crc kubenswrapper[5072]: E1124 11:09:53.310180 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60100e7d-c8b1-4b18-8567-46e21096fa0f-metrics-certs podName:60100e7d-c8b1-4b18-8567-46e21096fa0f nodeName:}" failed. No retries permitted until 2025-11-24 11:10:01.310157286 +0000 UTC m=+53.021681822 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60100e7d-c8b1-4b18-8567-46e21096fa0f-metrics-certs") pod "network-metrics-daemon-nnrv7" (UID: "60100e7d-c8b1-4b18-8567-46e21096fa0f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.389703 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.389790 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.389806 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.389836 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.389853 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:53Z","lastTransitionTime":"2025-11-24T11:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.493003 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.493057 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.493074 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.493096 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.493112 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:53Z","lastTransitionTime":"2025-11-24T11:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.595514 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.595573 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.595589 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.595614 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.595631 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:53Z","lastTransitionTime":"2025-11-24T11:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.698900 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.698953 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.698969 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.698992 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.699009 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:53Z","lastTransitionTime":"2025-11-24T11:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.801545 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.801603 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.801621 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.801647 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.801663 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:53Z","lastTransitionTime":"2025-11-24T11:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.904578 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.904637 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.904653 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.904677 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:53 crc kubenswrapper[5072]: I1124 11:09:53.904696 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:53Z","lastTransitionTime":"2025-11-24T11:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.007449 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.007485 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.007498 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.007514 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.007524 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:54Z","lastTransitionTime":"2025-11-24T11:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.016274 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:09:54 crc kubenswrapper[5072]: E1124 11:09:54.016697 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.017201 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:09:54 crc kubenswrapper[5072]: E1124 11:09:54.017423 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.017610 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:09:54 crc kubenswrapper[5072]: E1124 11:09:54.017738 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.111116 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.111435 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.111550 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.111685 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.111933 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:54Z","lastTransitionTime":"2025-11-24T11:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.215160 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.215218 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.215235 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.215261 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.215278 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:54Z","lastTransitionTime":"2025-11-24T11:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.318446 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.318483 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.318491 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.318508 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.318520 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:54Z","lastTransitionTime":"2025-11-24T11:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.421011 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.421208 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.421280 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.421366 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.421444 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:54Z","lastTransitionTime":"2025-11-24T11:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.524277 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.524596 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.524736 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.524873 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.525218 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:54Z","lastTransitionTime":"2025-11-24T11:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.627998 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.628039 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.628050 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.628065 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.628075 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:54Z","lastTransitionTime":"2025-11-24T11:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.730455 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.731147 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.731232 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.731314 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.731414 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:54Z","lastTransitionTime":"2025-11-24T11:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.833453 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.833511 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.833531 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.833558 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.833575 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:54Z","lastTransitionTime":"2025-11-24T11:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.935958 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.935995 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.936006 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.936022 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:54 crc kubenswrapper[5072]: I1124 11:09:54.936031 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:54Z","lastTransitionTime":"2025-11-24T11:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.016240 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:09:55 crc kubenswrapper[5072]: E1124 11:09:55.016442 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.038510 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.038555 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.038568 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.038585 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.038597 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:55Z","lastTransitionTime":"2025-11-24T11:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.141293 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.141342 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.141352 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.141369 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.141395 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:55Z","lastTransitionTime":"2025-11-24T11:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.244936 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.244990 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.245006 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.245029 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.245045 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:55Z","lastTransitionTime":"2025-11-24T11:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.348163 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.348204 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.348255 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.348309 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.348330 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:55Z","lastTransitionTime":"2025-11-24T11:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.451176 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.451242 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.451259 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.451283 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.451300 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:55Z","lastTransitionTime":"2025-11-24T11:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.554594 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.554652 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.554669 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.554692 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.554711 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:55Z","lastTransitionTime":"2025-11-24T11:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.657681 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.657751 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.657774 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.657803 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.657824 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:55Z","lastTransitionTime":"2025-11-24T11:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.760522 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.760592 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.760614 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.760643 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.760666 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:55Z","lastTransitionTime":"2025-11-24T11:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.863574 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.863649 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.863673 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.863702 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.863725 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:55Z","lastTransitionTime":"2025-11-24T11:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.966123 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.966179 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.966198 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.966221 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:55 crc kubenswrapper[5072]: I1124 11:09:55.966237 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:55Z","lastTransitionTime":"2025-11-24T11:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.016132 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.016155 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:09:56 crc kubenswrapper[5072]: E1124 11:09:56.016260 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:09:56 crc kubenswrapper[5072]: E1124 11:09:56.016473 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.016500 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:09:56 crc kubenswrapper[5072]: E1124 11:09:56.016559 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.069090 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.069132 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.069144 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.069163 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.069175 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:56Z","lastTransitionTime":"2025-11-24T11:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.172025 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.172081 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.172103 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.172127 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.172144 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:56Z","lastTransitionTime":"2025-11-24T11:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.274711 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.274772 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.274792 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.274817 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.274835 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:56Z","lastTransitionTime":"2025-11-24T11:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.376942 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.376987 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.376999 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.377017 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.377032 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:56Z","lastTransitionTime":"2025-11-24T11:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.480184 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.480235 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.480248 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.480267 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.480279 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:56Z","lastTransitionTime":"2025-11-24T11:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.582614 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.582664 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.582676 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.582694 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.582707 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:56Z","lastTransitionTime":"2025-11-24T11:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.701276 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.701329 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.701342 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.701361 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.701392 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:56Z","lastTransitionTime":"2025-11-24T11:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.805094 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.805146 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.805159 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.805178 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.805190 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:56Z","lastTransitionTime":"2025-11-24T11:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.908612 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.908669 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.908684 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.908702 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:56 crc kubenswrapper[5072]: I1124 11:09:56.908716 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:56Z","lastTransitionTime":"2025-11-24T11:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.012549 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.012619 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.012632 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.012703 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.012718 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:57Z","lastTransitionTime":"2025-11-24T11:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
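The loop above is the kubelet's node-status sync: the container runtime keeps answering NetworkReady=false because nothing has yet written a CNI config into /etc/kubernetes/cni/net.d/, so each sync re-records NodeNotReady until the network plugin (here ovn-kubernetes) drops its config file. A minimal sketch of that directory probe, assuming libcni's default extension set (.conf, .conflist, .json); the directory path comes from the log message itself:

from pathlib import Path

# Directory named in the log; the extension list is libcni's default (assumption).
CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")
CNI_EXTENSIONS = {".conf", ".conflist", ".json"}

def network_ready(conf_dir: Path = CNI_CONF_DIR) -> bool:
    """True once at least one CNI config file exists, i.e. the condition
    that would end the NodeNotReady loop recorded above."""
    if not conf_dir.is_dir():
        return False
    return any(p.is_file() and p.suffix in CNI_EXTENSIONS for p in conf_dir.iterdir())

if __name__ == "__main__":
    print("NetworkReady:", network_ready())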
Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.015691 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7"
Nov 24 11:09:57 crc kubenswrapper[5072]: E1124 11:09:57.015820 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f"
Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.016602 5072 scope.go:117] "RemoveContainer" containerID="17a209788447e8d556a2f5d4611b2979e998e017c2ad7a81d88b9d005f215721"
[... the status cycle repeats at 11:09:57.115, 57.218 and 57.320 ...]
Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.362013 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4qmw_80fda759-ddfd-438a-b5a2-cb775ee1bf7e/ovnkube-controller/1.log"
Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.363917 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" event={"ID":"80fda759-ddfd-438a-b5a2-cb775ee1bf7e","Type":"ContainerStarted","Data":"06ce6673e7a7189e88659cf5cb63428c7ad38aea24f770411a7de6b3754b27b7"}
Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.364025 5072 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.379589 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:57Z is after 2025-08-24T17:21:41Z"
[... the status cycle repeats at 11:09:57.423 ...]
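Every one of these status patches fails for the same reason: the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743 presents a serving certificate whose notAfter is months behind the node's clock, so the API server rejects each patch. The two timestamps quoted in the x509 error are enough to measure the gap; a stdlib-only sketch, with both values copied from the error above:

from datetime import datetime, timezone

def parse_utc(ts: str) -> datetime:
    # Timestamps appear in the log in RFC 3339 "Z" form.
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

not_after = parse_utc("2025-08-24T17:21:41Z")  # cert notAfter, from the x509 error
now = parse_utc("2025-11-24T11:09:57Z")        # node clock, from the same error

delta = now - not_after
print(f"webhook cert expired {delta.days} days, {delta.seconds // 3600} h before this capture")
# -> webhook cert expired 91 days, 17 h before this capture

A gap this large is consistent with a VM resumed long after its rotated certificates lapsed: the kubelet itself keeps running, but anything gated by this webhook fails until the certificates are regenerated.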
Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.445360 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:57Z is after 2025-08-24T17:21:41Z"
Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.471444 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.500064 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.526144 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.526217 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.526240 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.526273 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.526297 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:57Z","lastTransitionTime":"2025-11-24T11:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.528044 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.542778 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.568460 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ce6673e7a7189e88659cf5cb63428c7ad38aea
24f770411a7de6b3754b27b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17a209788447e8d556a2f5d4611b2979e998e017c2ad7a81d88b9d005f215721\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:09:43Z\\\",\\\"message\\\":\\\"amespaces:*false,},},},Features:nil,},}\\\\nI1124 11:09:43.362510 6535 egressqos.go:1009] Finished syncing EgressQoS node crc : 15.350947ms\\\\nI1124 11:09:43.362562 6535 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI1124 11:09:43.362359 6535 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:09:43.362598 6535 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 11:09:43.362644 6535 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:09:43.362667 6535 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:09:43.362683 6535 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 11:09:43.362716 6535 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:09:43.362743 6535 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:09:43.362757 6535 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:09:43.362762 6535 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:09:43.362775 6535 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:09:43.362781 6535 factory.go:656] Stopping watch factory\\\\nI1124 11:09:43.362790 6535 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:09:43.362800 6535 ovnkube.go:599] Stopped ovnkube\\\\nI1124 
11:09:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.587357 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69b8017daa872327d88eab8150845309e30c5cf37b229292e7c8a80e5d599c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.598914 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c05ddf6-986e-4bd6-95f0-7d734bc59140\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://894e58e94d99e8ef26722db709e0135d59ac4847daf001e37ce266c9baf02e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4b260f16a11dade8c8b120408cf2d167dd868a9b938f4231aa811546252c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wndk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:57Z is after 2025-08-24T17:21:41Z" Nov 24 
11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.611052 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nnrv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60100e7d-c8b1-4b18-8567-46e21096fa0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nnrv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.628968 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.629029 5072 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.629054 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.629084 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.629108 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:57Z","lastTransitionTime":"2025-11-24T11:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.630645 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\
\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.646530 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.661115 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.675340 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.685419 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.695222 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.731482 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.731526 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.731541 5072 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.731561 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.731575 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:57Z","lastTransitionTime":"2025-11-24T11:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.834777 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.834827 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.834839 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.834859 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.834876 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:57Z","lastTransitionTime":"2025-11-24T11:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.937774 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.937850 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.937885 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.937917 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:57 crc kubenswrapper[5072]: I1124 11:09:57.937935 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:57Z","lastTransitionTime":"2025-11-24T11:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.015563 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.015646 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:09:58 crc kubenswrapper[5072]: E1124 11:09:58.015736 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.015674 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:09:58 crc kubenswrapper[5072]: E1124 11:09:58.015853 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:09:58 crc kubenswrapper[5072]: E1124 11:09:58.016128 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.041071 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.041140 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.041152 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.041168 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.041181 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:58Z","lastTransitionTime":"2025-11-24T11:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.144146 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.144211 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.144230 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.144253 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.144271 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:58Z","lastTransitionTime":"2025-11-24T11:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.247331 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.247423 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.247443 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.247470 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.247488 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:58Z","lastTransitionTime":"2025-11-24T11:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.350781 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.350875 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.350908 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.350939 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.350961 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:58Z","lastTransitionTime":"2025-11-24T11:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.369731 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4qmw_80fda759-ddfd-438a-b5a2-cb775ee1bf7e/ovnkube-controller/2.log" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.370849 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4qmw_80fda759-ddfd-438a-b5a2-cb775ee1bf7e/ovnkube-controller/1.log" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.374194 5072 generic.go:334] "Generic (PLEG): container finished" podID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerID="06ce6673e7a7189e88659cf5cb63428c7ad38aea24f770411a7de6b3754b27b7" exitCode=1 Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.374244 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" event={"ID":"80fda759-ddfd-438a-b5a2-cb775ee1bf7e","Type":"ContainerDied","Data":"06ce6673e7a7189e88659cf5cb63428c7ad38aea24f770411a7de6b3754b27b7"} Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.374282 5072 scope.go:117] "RemoveContainer" containerID="17a209788447e8d556a2f5d4611b2979e998e017c2ad7a81d88b9d005f215721" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.375484 5072 scope.go:117] "RemoveContainer" containerID="06ce6673e7a7189e88659cf5cb63428c7ad38aea24f770411a7de6b3754b27b7" Nov 24 11:09:58 crc kubenswrapper[5072]: E1124 11:09:58.375850 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n4qmw_openshift-ovn-kubernetes(80fda759-ddfd-438a-b5a2-cb775ee1bf7e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.390112 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.407476 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.420876 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.433303 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.452218 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ce6673e7a7189e88659cf5cb63428c7ad38aea
24f770411a7de6b3754b27b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17a209788447e8d556a2f5d4611b2979e998e017c2ad7a81d88b9d005f215721\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:09:43Z\\\",\\\"message\\\":\\\"amespaces:*false,},},},Features:nil,},}\\\\nI1124 11:09:43.362510 6535 egressqos.go:1009] Finished syncing EgressQoS node crc : 15.350947ms\\\\nI1124 11:09:43.362562 6535 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI1124 11:09:43.362359 6535 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:09:43.362598 6535 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 11:09:43.362644 6535 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:09:43.362667 6535 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:09:43.362683 6535 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 11:09:43.362716 6535 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:09:43.362743 6535 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:09:43.362757 6535 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:09:43.362762 6535 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:09:43.362775 6535 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:09:43.362781 6535 factory.go:656] Stopping watch factory\\\\nI1124 11:09:43.362790 6535 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:09:43.362800 6535 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:09:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06ce6673e7a7189e88659cf5cb63428c7ad38aea24f770411a7de6b3754b27b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:09:57Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.36\\\\\\\", 
Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1124 11:09:57.933863 6751 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:09:57.933893 6751 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:09:57.933975 6751 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.453131 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.453183 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.453200 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.453222 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.453240 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:58Z","lastTransitionTime":"2025-11-24T11:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.468160 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.483259 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.501326 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69b8017daa872327d88eab8150845309e30c5cf37b229292e7c8a80e5d599c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.515005 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.526068 5072 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.539708 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c05ddf6-986e-4bd6-95f0-7d734bc59140\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://894e58e94d99e8ef26722db709e0135d59ac4847daf001e37ce266c9baf02e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4b260f16a11dade8c8b120408cf2d167dd868a9b938f4231aa811546252c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wndk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:58Z is after 2025-08-24T17:21:41Z" Nov 24 
11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.551948 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nnrv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60100e7d-c8b1-4b18-8567-46e21096fa0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nnrv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.555687 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.555740 5072 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.555756 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.555779 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.555795 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:58Z","lastTransitionTime":"2025-11-24T11:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.567176 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\
\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.584435 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.596564 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.614960 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.658029 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.658097 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.658119 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.658148 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.658169 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:58Z","lastTransitionTime":"2025-11-24T11:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.761818 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.761879 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.761896 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.761973 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.761993 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:58Z","lastTransitionTime":"2025-11-24T11:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.864848 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.864905 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.864922 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.864946 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.864963 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:58Z","lastTransitionTime":"2025-11-24T11:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.908024 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.918919 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.930107 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.948933 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.967662 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.967693 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.967720 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.967735 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.967746 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:58Z","lastTransitionTime":"2025-11-24T11:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.973013 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:58 crc kubenswrapper[5072]: I1124 11:09:58.994086 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.009486 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.015685 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:09:59 crc kubenswrapper[5072]: E1124 11:09:59.015776 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.031929 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.065012 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ce6673e7a7189e88659cf5cb63428c7ad38aea24f770411a7de6b3754b27b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17a209788447e8d556a2f5d4611b2979e998e017c2ad7a81d88b9d005f215721\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:09:43Z\\\",\\\"message\\\":\\\"amespaces:*false,},},},Features:nil,},}\\\\nI1124 11:09:43.362510 6535 egressqos.go:1009] Finished syncing EgressQoS node crc : 15.350947ms\\\\nI1124 11:09:43.362562 6535 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI1124 11:09:43.362359 6535 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:09:43.362598 6535 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 11:09:43.362644 6535 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:09:43.362667 6535 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:09:43.362683 6535 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 11:09:43.362716 6535 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:09:43.362743 6535 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:09:43.362757 6535 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:09:43.362762 6535 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:09:43.362775 6535 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:09:43.362781 6535 factory.go:656] Stopping watch factory\\\\nI1124 11:09:43.362790 6535 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:09:43.362800 6535 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:09:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06ce6673e7a7189e88659cf5cb63428c7ad38aea24f770411a7de6b3754b27b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:09:57Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1124 11:09:57.933863 6751 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:09:57.933893 6751 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:09:57.933975 6751 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.070025 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.070080 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.070092 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.070106 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.070117 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:59Z","lastTransitionTime":"2025-11-24T11:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.087200 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69b8017daa872327d88eab8150845309e30c5cf37b229292e7c8a80e5d599c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.102801 5072 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.122123 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c05ddf6-986e-4bd6-95f0-7d734bc59140\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://894e58e94d99e8ef26722db709e0135d59ac4847daf001e37ce266c9baf02e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4b260f16a11dade8c8b120408cf2d167dd868a9b938f4231aa811546252c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wndk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 
11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.135573 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nnrv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60100e7d-c8b1-4b18-8567-46e21096fa0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nnrv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.149014 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.164358 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.172645 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.172711 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.172732 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.172760 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.172777 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:59Z","lastTransitionTime":"2025-11-24T11:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.177954 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.194962 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.207583 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.230178 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69b8017daa872327d88eab8150845309e30c5cf37b229292e7c8a80e5d599c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.247901 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.261174 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.274752 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c05ddf6-986e-4bd6-95f0-7d734bc59140\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://894e58e94d99e8ef26722db709e0135d59ac4847daf001e37ce266c9baf02e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4b260f16a11dade8c8b120408cf2d167dd868a9b938f4231aa811546252c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wndk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 
11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.275279 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.275308 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.275317 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.275331 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.275341 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:59Z","lastTransitionTime":"2025-11-24T11:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.287715 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nnrv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60100e7d-c8b1-4b18-8567-46e21096fa0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nnrv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.302176 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.323019 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.335484 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.352885 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.372552 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.380922 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4qmw_80fda759-ddfd-438a-b5a2-cb775ee1bf7e/ovnkube-controller/2.log" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.382077 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.382129 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.382147 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.382171 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.382191 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:59Z","lastTransitionTime":"2025-11-24T11:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.391484 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-ku
be-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.411511 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.431954 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.468841 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ce6673e7a7189e88659cf5cb63428c7ad38aea24f770411a7de6b3754b27b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17a209788447e8d556a2f5d4611b2979e998e017c2ad7a81d88b9d005f215721\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:09:43Z\\\",\\\"message\\\":\\\"amespaces:*false,},},},Features:nil,},}\\\\nI1124 11:09:43.362510 6535 egressqos.go:1009] Finished syncing EgressQoS node crc : 15.350947ms\\\\nI1124 11:09:43.362562 6535 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI1124 11:09:43.362359 6535 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:09:43.362598 6535 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 11:09:43.362644 6535 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:09:43.362667 6535 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:09:43.362683 6535 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 11:09:43.362716 6535 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:09:43.362743 6535 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:09:43.362757 6535 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:09:43.362762 6535 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:09:43.362775 6535 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:09:43.362781 6535 factory.go:656] Stopping watch factory\\\\nI1124 11:09:43.362790 6535 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:09:43.362800 6535 ovnkube.go:599] Stopped ovnkube\\\\nI1124 
11:09:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06ce6673e7a7189e88659cf5cb63428c7ad38aea24f770411a7de6b3754b27b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:09:57Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1124 11:09:57.933863 6751 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:09:57.933893 6751 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:09:57.933975 6751 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.486037 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.486095 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.486116 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.486146 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.486166 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:59Z","lastTransitionTime":"2025-11-24T11:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.490418 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3de15bd-d863-49c9-a84d-44e5af94f01c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1845d620994797b0fad3550ee243fdb5719b076cd21e2cd9fbdbfd84d5afd805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://802b58c2bb92a1887147eee76414a66c948e077ad8a3835bccd344ae67562b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ca0cd9727c9f25252266ba758cfa75b6d48b1f683f97b36bc3a40d6e4d9346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9d18d2efa1c3559a3a17858453a13c76b7567ffb215046c57556b661890c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91aa9d18d2efa1c3559a3a17858453a13c76b7567ffb215046c57556b661890c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.510602 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422
e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.528690 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.589767 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.589799 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.589807 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.589820 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.589830 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:59Z","lastTransitionTime":"2025-11-24T11:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.692595 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.692659 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.692672 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.692688 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.692721 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:59Z","lastTransitionTime":"2025-11-24T11:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.795669 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.795718 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.795726 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.795738 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.795747 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:59Z","lastTransitionTime":"2025-11-24T11:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.811727 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.811764 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.811926 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.811943 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.811953 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:59Z","lastTransitionTime":"2025-11-24T11:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:09:59 crc kubenswrapper[5072]: E1124 11:09:59.829939 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.834623 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.834687 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.834704 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.834729 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.834746 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:59Z","lastTransitionTime":"2025-11-24T11:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:59 crc kubenswrapper[5072]: E1124 11:09:59.852086 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.855791 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.855820 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.855828 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.855842 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.855855 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:59Z","lastTransitionTime":"2025-11-24T11:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:59 crc kubenswrapper[5072]: E1124 11:09:59.869306 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.873432 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.873453 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.873461 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.873473 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.873482 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:59Z","lastTransitionTime":"2025-11-24T11:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:59 crc kubenswrapper[5072]: E1124 11:09:59.891821 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.896077 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.896111 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.896122 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.896158 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.896173 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:59Z","lastTransitionTime":"2025-11-24T11:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:59 crc kubenswrapper[5072]: E1124 11:09:59.910057 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:09:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:09:59 crc kubenswrapper[5072]: E1124 11:09:59.910319 5072 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.912146 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.912189 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.912200 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.912217 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.912234 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:09:59Z","lastTransitionTime":"2025-11-24T11:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.991697 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:09:59 crc kubenswrapper[5072]: I1124 11:09:59.992464 5072 scope.go:117] "RemoveContainer" containerID="06ce6673e7a7189e88659cf5cb63428c7ad38aea24f770411a7de6b3754b27b7" Nov 24 11:09:59 crc kubenswrapper[5072]: E1124 11:09:59.992629 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n4qmw_openshift-ovn-kubernetes(80fda759-ddfd-438a-b5a2-cb775ee1bf7e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.006104 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3de15bd-d863-49c9-a84d-44e5af94f01c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1845d620994797b0fad3550ee243fdb5719b076cd21e2cd9fbdbfd84d5afd805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://802b58c2bb92a1887147eee76414a66c948e077ad8a3835bccd344ae67562b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ca0cd9727c9f25252266ba758cfa75b6d48b1f683f97b36bc3a40d6e4d9346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9d18d2efa1c3559a3a17858453a13c76b7567ffb215046c57556b661890c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91aa9d18d2efa1c3559a3a17858453a13c76b7567ffb215046c57556b661890c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.014894 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.014933 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.014945 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.014962 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.014973 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:00Z","lastTransitionTime":"2025-11-24T11:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.015324 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.015351 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:10:00 crc kubenswrapper[5072]: E1124 11:10:00.015494 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.015703 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:10:00 crc kubenswrapper[5072]: E1124 11:10:00.015804 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:10:00 crc kubenswrapper[5072]: E1124 11:10:00.016421 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.027129 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.041755 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.071884 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ce6673e7a7189e88659cf5cb63428c7ad38aea
24f770411a7de6b3754b27b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06ce6673e7a7189e88659cf5cb63428c7ad38aea24f770411a7de6b3754b27b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:09:57Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1124 11:09:57.933863 6751 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:09:57.933893 6751 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:09:57.933975 6751 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4qmw_openshift-ovn-kubernetes(80fda759-ddfd-438a-b5a2-cb775ee1bf7e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.096254 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69b8017daa872327d88eab8150845309e30c5cf37b229292e7c8a80e5d599c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.112722 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nnrv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60100e7d-c8b1-4b18-8567-46e21096fa0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nnrv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.119572 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.119608 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.119619 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.119636 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.119647 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:00Z","lastTransitionTime":"2025-11-24T11:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.129882 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.144556 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.153886 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.167759 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.182903 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.199150 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.214571 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c05ddf6-986e-4bd6-95f0-7d734bc59140\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://894e58e94d99e8ef26722db709e0135d59ac4847daf001e37ce266c9baf02e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4b260f16a11dade8c8b120408cf2d167dd868a9b938f4231aa811546252c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wndk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:00Z is after 2025-08-24T17:21:41Z" Nov 24 
11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.221591 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.221643 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.221660 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.221683 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.221704 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:00Z","lastTransitionTime":"2025-11-24T11:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.230798 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.250507 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.267660 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.285606 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:00Z is after 2025-08-24T17:21:41Z"
Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.323698 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.324012 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.324112 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.324210 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:00 crc kubenswrapper[5072]: I1124 11:10:00.324307 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:00Z","lastTransitionTime":"2025-11-24T11:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:01 crc kubenswrapper[5072]: I1124 11:10:01.015991 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7"
Nov 24 11:10:01 crc kubenswrapper[5072]: E1124 11:10:01.016413 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f"
pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:10:01 crc kubenswrapper[5072]: I1124 11:10:01.051972 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:01 crc kubenswrapper[5072]: I1124 11:10:01.052461 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:01 crc kubenswrapper[5072]: I1124 11:10:01.052729 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:01 crc kubenswrapper[5072]: I1124 11:10:01.052904 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:01 crc kubenswrapper[5072]: I1124 11:10:01.053064 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:01Z","lastTransitionTime":"2025-11-24T11:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:01 crc kubenswrapper[5072]: I1124 11:10:01.156649 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:01 crc kubenswrapper[5072]: I1124 11:10:01.157072 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:01 crc kubenswrapper[5072]: I1124 11:10:01.157327 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:01 crc kubenswrapper[5072]: I1124 11:10:01.157569 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:01 crc kubenswrapper[5072]: I1124 11:10:01.157723 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:01Z","lastTransitionTime":"2025-11-24T11:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:01 crc kubenswrapper[5072]: I1124 11:10:01.260857 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:01 crc kubenswrapper[5072]: I1124 11:10:01.260948 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:01 crc kubenswrapper[5072]: I1124 11:10:01.260966 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:01 crc kubenswrapper[5072]: I1124 11:10:01.260989 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:01 crc kubenswrapper[5072]: I1124 11:10:01.261045 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:01Z","lastTransitionTime":"2025-11-24T11:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:01 crc kubenswrapper[5072]: I1124 11:10:01.364357 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:01 crc kubenswrapper[5072]: I1124 11:10:01.364731 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:01 crc kubenswrapper[5072]: I1124 11:10:01.364808 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:01 crc kubenswrapper[5072]: I1124 11:10:01.364913 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:01 crc kubenswrapper[5072]: I1124 11:10:01.364999 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:01Z","lastTransitionTime":"2025-11-24T11:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:01 crc kubenswrapper[5072]: I1124 11:10:01.397533 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60100e7d-c8b1-4b18-8567-46e21096fa0f-metrics-certs\") pod \"network-metrics-daemon-nnrv7\" (UID: \"60100e7d-c8b1-4b18-8567-46e21096fa0f\") " pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:10:01 crc kubenswrapper[5072]: E1124 11:10:01.397737 5072 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:10:01 crc kubenswrapper[5072]: E1124 11:10:01.397848 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60100e7d-c8b1-4b18-8567-46e21096fa0f-metrics-certs podName:60100e7d-c8b1-4b18-8567-46e21096fa0f nodeName:}" failed. No retries permitted until 2025-11-24 11:10:17.397821811 +0000 UTC m=+69.109346317 (durationBeforeRetry 16s). 
Nov 24 11:10:01 crc kubenswrapper[5072]: I1124 11:10:01.801256 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:10:01 crc kubenswrapper[5072]: E1124 11:10:01.801470 5072 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 24 11:10:01 crc kubenswrapper[5072]: E1124 11:10:01.801579 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:10:33.801551182 +0000 UTC m=+85.513075688 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 24 11:10:01 crc kubenswrapper[5072]: I1124 11:10:01.902230 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:10:01 crc kubenswrapper[5072]: I1124 11:10:01.902475 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:10:01 crc kubenswrapper[5072]: I1124 11:10:01.902537 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:10:01 crc kubenswrapper[5072]: I1124 11:10:01.902583 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:10:01 crc kubenswrapper[5072]: E1124 11:10:01.902761 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 24 11:10:01 crc kubenswrapper[5072]: E1124 11:10:01.902795 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 24 11:10:01 crc kubenswrapper[5072]: E1124 11:10:01.902814 5072 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 11:10:01 crc kubenswrapper[5072]: E1124 11:10:01.902868 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:10:33.902847984 +0000 UTC m=+85.614372460 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:10:01 crc kubenswrapper[5072]: E1124 11:10:01.902889 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 24 11:10:01 crc kubenswrapper[5072]: E1124 11:10:01.902907 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 24 11:10:01 crc kubenswrapper[5072]: E1124 11:10:01.902917 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:10:33.902907555 +0000 UTC m=+85.614432031 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 11:10:01 crc kubenswrapper[5072]: E1124 11:10:01.902921 5072 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 11:10:01 crc kubenswrapper[5072]: E1124 11:10:01.902978 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:10:33.902961437 +0000 UTC m=+85.614485953 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 11:10:01 crc kubenswrapper[5072]: E1124 11:10:01.903254 5072 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Nov 24 11:10:01 crc kubenswrapper[5072]: E1124 11:10:01.903456 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:10:33.903433658 +0000 UTC m=+85.614958134 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.015521 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.015567 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:10:02 crc kubenswrapper[5072]: E1124 11:10:02.015671 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.015705 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:10:02 crc kubenswrapper[5072]: E1124 11:10:02.015854 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:10:02 crc kubenswrapper[5072]: E1124 11:10:02.015955 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.098139 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.098193 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.098256 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.098283 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.098300 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:02Z","lastTransitionTime":"2025-11-24T11:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.200752 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.200822 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.200840 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.200867 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.200886 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:02Z","lastTransitionTime":"2025-11-24T11:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.303055 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.303091 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.303106 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.303127 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.303144 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:02Z","lastTransitionTime":"2025-11-24T11:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.406006 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.406080 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.406102 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.406132 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.406155 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:02Z","lastTransitionTime":"2025-11-24T11:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.513515 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.513569 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.513594 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.513613 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.513627 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:02Z","lastTransitionTime":"2025-11-24T11:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.616628 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.616669 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.616679 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.616717 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.616728 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:02Z","lastTransitionTime":"2025-11-24T11:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.720029 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.720071 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.720086 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.720108 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.720123 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:02Z","lastTransitionTime":"2025-11-24T11:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.822646 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.822700 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.822719 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.822742 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.822760 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:02Z","lastTransitionTime":"2025-11-24T11:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.925986 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.926049 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.926067 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.926092 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:02 crc kubenswrapper[5072]: I1124 11:10:02.926109 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:02Z","lastTransitionTime":"2025-11-24T11:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:03 crc kubenswrapper[5072]: I1124 11:10:03.015709 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:10:03 crc kubenswrapper[5072]: E1124 11:10:03.015902 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:10:03 crc kubenswrapper[5072]: I1124 11:10:03.028163 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:03 crc kubenswrapper[5072]: I1124 11:10:03.028228 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:03 crc kubenswrapper[5072]: I1124 11:10:03.028251 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:03 crc kubenswrapper[5072]: I1124 11:10:03.028276 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:03 crc kubenswrapper[5072]: I1124 11:10:03.028296 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:03Z","lastTransitionTime":"2025-11-24T11:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 24 11:10:04 crc kubenswrapper[5072]: I1124 11:10:04.015358 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:10:04 crc kubenswrapper[5072]: I1124 11:10:04.015358 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:10:04 crc kubenswrapper[5072]: E1124 11:10:04.015584 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 11:10:04 crc kubenswrapper[5072]: E1124 11:10:04.015682 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:10:04 crc kubenswrapper[5072]: I1124 11:10:04.015407 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:10:04 crc kubenswrapper[5072]: E1124 11:10:04.015799 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:10:04 crc kubenswrapper[5072]: I1124 11:10:04.059458 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:04 crc kubenswrapper[5072]: I1124 11:10:04.059497 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:04 crc kubenswrapper[5072]: I1124 11:10:04.059513 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:04 crc kubenswrapper[5072]: I1124 11:10:04.059534 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:04 crc kubenswrapper[5072]: I1124 11:10:04.059550 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:04Z","lastTransitionTime":"2025-11-24T11:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.016459 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7"
Nov 24 11:10:05 crc kubenswrapper[5072]: E1124 11:10:05.016662 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f"
Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.093190 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.093241 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.093259 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.093278 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.093295 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:05Z","lastTransitionTime":"2025-11-24T11:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.196084 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.196119 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.196129 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.196143 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.196155 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:05Z","lastTransitionTime":"2025-11-24T11:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.299353 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.299780 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.299951 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.300120 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.300273 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:05Z","lastTransitionTime":"2025-11-24T11:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.403140 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.403201 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.403218 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.403241 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.403259 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:05Z","lastTransitionTime":"2025-11-24T11:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.506106 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.506200 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.506228 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.506259 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.506282 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:05Z","lastTransitionTime":"2025-11-24T11:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.609349 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.609433 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.609459 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.609488 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.609510 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:05Z","lastTransitionTime":"2025-11-24T11:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.712769 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.712831 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.712852 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.712880 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.712902 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:05Z","lastTransitionTime":"2025-11-24T11:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.815588 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.816034 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.816250 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.816477 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.816733 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:05Z","lastTransitionTime":"2025-11-24T11:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.920200 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.920256 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.920275 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.920299 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:05 crc kubenswrapper[5072]: I1124 11:10:05.920316 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:05Z","lastTransitionTime":"2025-11-24T11:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.015738 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.015844 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:10:06 crc kubenswrapper[5072]: E1124 11:10:06.015915 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.016005 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:10:06 crc kubenswrapper[5072]: E1124 11:10:06.016011 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:10:06 crc kubenswrapper[5072]: E1124 11:10:06.016097 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.023461 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.023600 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.023718 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.023800 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.023933 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:06Z","lastTransitionTime":"2025-11-24T11:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.126933 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.126999 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.127017 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.127040 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.127057 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:06Z","lastTransitionTime":"2025-11-24T11:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.230715 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.230770 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.230787 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.230812 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.230829 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:06Z","lastTransitionTime":"2025-11-24T11:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.333329 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.333642 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.333781 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.333872 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.333955 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:06Z","lastTransitionTime":"2025-11-24T11:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.437927 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.437988 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.438010 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.438036 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.438056 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:06Z","lastTransitionTime":"2025-11-24T11:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.541552 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.541607 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.541624 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.541650 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.541668 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:06Z","lastTransitionTime":"2025-11-24T11:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.645584 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.645662 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.645688 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.645717 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.645742 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:06Z","lastTransitionTime":"2025-11-24T11:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.749305 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.749363 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.749426 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.749456 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.749475 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:06Z","lastTransitionTime":"2025-11-24T11:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.852435 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.852487 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.852504 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.852526 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.852543 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:06Z","lastTransitionTime":"2025-11-24T11:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.955150 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.955495 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.955867 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.956061 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:06 crc kubenswrapper[5072]: I1124 11:10:06.956215 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:06Z","lastTransitionTime":"2025-11-24T11:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.016480 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:10:07 crc kubenswrapper[5072]: E1124 11:10:07.016699 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.059343 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.059402 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.059414 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.059432 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.059442 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:07Z","lastTransitionTime":"2025-11-24T11:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.161512 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.161582 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.161607 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.161635 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.161655 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:07Z","lastTransitionTime":"2025-11-24T11:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.270013 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.270101 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.270122 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.270145 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.270172 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:07Z","lastTransitionTime":"2025-11-24T11:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.374157 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.374234 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.374256 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.374285 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.374308 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:07Z","lastTransitionTime":"2025-11-24T11:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.477685 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.477744 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.477762 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.477786 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.477804 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:07Z","lastTransitionTime":"2025-11-24T11:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.581118 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.581175 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.581191 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.581214 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.581232 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:07Z","lastTransitionTime":"2025-11-24T11:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.685190 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.685254 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.685278 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.685307 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.685328 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:07Z","lastTransitionTime":"2025-11-24T11:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.788496 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.788572 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.788589 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.788639 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.788657 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:07Z","lastTransitionTime":"2025-11-24T11:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.891823 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.891866 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.891883 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.891902 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.891918 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:07Z","lastTransitionTime":"2025-11-24T11:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.994798 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.994847 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.994859 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.994876 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:07 crc kubenswrapper[5072]: I1124 11:10:07.994891 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:07Z","lastTransitionTime":"2025-11-24T11:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.016509 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.016541 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.016555 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:10:08 crc kubenswrapper[5072]: E1124 11:10:08.016733 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:10:08 crc kubenswrapper[5072]: E1124 11:10:08.016923 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:10:08 crc kubenswrapper[5072]: E1124 11:10:08.016989 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.098074 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.098158 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.098183 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.098216 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.098240 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:08Z","lastTransitionTime":"2025-11-24T11:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.200956 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.201001 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.201010 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.201023 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.201032 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:08Z","lastTransitionTime":"2025-11-24T11:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.304037 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.304091 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.304111 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.304135 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.304152 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:08Z","lastTransitionTime":"2025-11-24T11:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.406854 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.406907 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.406915 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.406928 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.406937 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:08Z","lastTransitionTime":"2025-11-24T11:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.509860 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.509904 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.509920 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.509941 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.509958 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:08Z","lastTransitionTime":"2025-11-24T11:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.613250 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.613340 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.613364 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.613428 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.613445 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:08Z","lastTransitionTime":"2025-11-24T11:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.716479 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.716539 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.716556 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.716579 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.716595 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:08Z","lastTransitionTime":"2025-11-24T11:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.819786 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.819825 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.819838 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.819857 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.819871 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:08Z","lastTransitionTime":"2025-11-24T11:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.922833 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.922902 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.922921 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.922951 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:08 crc kubenswrapper[5072]: I1124 11:10:08.922970 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:08Z","lastTransitionTime":"2025-11-24T11:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.015811 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:10:09 crc kubenswrapper[5072]: E1124 11:10:09.016036 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.025993 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.026041 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.026058 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.026083 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.026187 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:09Z","lastTransitionTime":"2025-11-24T11:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.034503 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.054307 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c05ddf6-986e-4bd6-95f0-7d734bc59140\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://894e58e94d99e8ef26722db709e0135d59ac4847daf001e37ce266c9baf02e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4b260f16a11dade8c8b120408cf2d167dd868a9b938f4231aa811546252c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wndk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:09Z is after 2025-08-24T17:21:41Z" Nov 24 
11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.070130 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nnrv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60100e7d-c8b1-4b18-8567-46e21096fa0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nnrv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.089531 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.108560 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.124674 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.128345 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.128409 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.128447 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.128467 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.128481 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:09Z","lastTransitionTime":"2025-11-24T11:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.148429 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.161447 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.175961 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manag
er-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.198583 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.218476 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.232050 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.232113 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.232137 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.232166 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.232190 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:09Z","lastTransitionTime":"2025-11-24T11:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.237790 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.256176 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3de15bd-d863-49c9-a84d-44e5af94f01c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1845d620994797b0fad3550ee243fdb5719b076cd21e2cd9fbdbfd84d5afd805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://802b58c2bb92a1887147eee76414a66c948e077ad8a3835bccd344ae67562b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09
:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ca0cd9727c9f25252266ba758cfa75b6d48b1f683f97b36bc3a40d6e4d9346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9d18d2efa1c3559a3a17858453a13c76b7567ffb215046c57556b661890c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91aa9d18d2efa1c3559a3a17858453a13c76b7567ffb215046c57556b661890c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.282155 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.302508 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:09Z is after 2025-08-24T17:21:41Z"
Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.336606 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.336657 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.336673 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.336701 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.336718 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:09Z","lastTransitionTime":"2025-11-24T11:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.338097 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ce6673e7a7189e88659cf5cb63428c7ad38aea24f770411a7de6b3754b27b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06ce6673e7a7189e88659cf5cb63428c7ad38aea24f770411a7de6b3754b27b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:09:57Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1124 11:09:57.933863 6751 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:09:57.933893 6751 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:09:57.933975 6751 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n4qmw_openshift-ovn-kubernetes(80fda759-ddfd-438a-b5a2-cb775ee1bf7e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.404424 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69b8017daa872327d88eab8150845309e30c5cf37b229292e7c8a80e5d599c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.439181 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.439281 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:09 crc 
kubenswrapper[5072]: I1124 11:10:09.439302 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.439330 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.439352 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:09Z","lastTransitionTime":"2025-11-24T11:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.541335 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.541362 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.541388 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.541399 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.541407 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:09Z","lastTransitionTime":"2025-11-24T11:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.644549 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.644695 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.644722 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.644751 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.644776 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:09Z","lastTransitionTime":"2025-11-24T11:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.747511 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.747559 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.747575 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.747627 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.747644 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:09Z","lastTransitionTime":"2025-11-24T11:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.850618 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.850670 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.850687 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.850710 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.850727 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:09Z","lastTransitionTime":"2025-11-24T11:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.953685 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.953735 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.953751 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.953774 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:09 crc kubenswrapper[5072]: I1124 11:10:09.953792 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:09Z","lastTransitionTime":"2025-11-24T11:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.015909 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.015963 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.015996 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:10:10 crc kubenswrapper[5072]: E1124 11:10:10.016120 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:10:10 crc kubenswrapper[5072]: E1124 11:10:10.016263 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:10:10 crc kubenswrapper[5072]: E1124 11:10:10.016406 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.056199 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.056266 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.056283 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.056306 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.056323 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:10Z","lastTransitionTime":"2025-11-24T11:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.159817 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.159859 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.159876 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.159897 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.159914 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:10Z","lastTransitionTime":"2025-11-24T11:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.263292 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.263349 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.263365 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.263418 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.263435 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:10Z","lastTransitionTime":"2025-11-24T11:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.285077 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.285139 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.285156 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.285186 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.285205 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:10Z","lastTransitionTime":"2025-11-24T11:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:10 crc kubenswrapper[5072]: E1124 11:10:10.306636 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.312174 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.312224 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.312241 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.312268 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.312286 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:10Z","lastTransitionTime":"2025-11-24T11:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:10 crc kubenswrapper[5072]: E1124 11:10:10.333522 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.339987 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.340043 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.340063 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.340087 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.340110 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:10Z","lastTransitionTime":"2025-11-24T11:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:10 crc kubenswrapper[5072]: E1124 11:10:10.362093 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.367590 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.367672 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.367691 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.367712 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.367728 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:10Z","lastTransitionTime":"2025-11-24T11:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:10 crc kubenswrapper[5072]: E1124 11:10:10.388541 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.393620 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.393702 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.393725 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.393756 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.393781 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:10Z","lastTransitionTime":"2025-11-24T11:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:10 crc kubenswrapper[5072]: E1124 11:10:10.418184 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:10 crc kubenswrapper[5072]: E1124 11:10:10.418313 5072 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.421495 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.421521 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.421529 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.421547 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.421558 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:10Z","lastTransitionTime":"2025-11-24T11:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.524142 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.524213 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.524242 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.524273 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.524294 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:10Z","lastTransitionTime":"2025-11-24T11:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.627661 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.627722 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.627738 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.627764 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.627781 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:10Z","lastTransitionTime":"2025-11-24T11:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.730535 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.730594 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.730611 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.730633 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.730649 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:10Z","lastTransitionTime":"2025-11-24T11:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.833773 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.833832 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.833849 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.833932 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.833953 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:10Z","lastTransitionTime":"2025-11-24T11:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.936986 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.937049 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.937073 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.937105 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:10 crc kubenswrapper[5072]: I1124 11:10:10.937123 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:10Z","lastTransitionTime":"2025-11-24T11:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.016228 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:10:11 crc kubenswrapper[5072]: E1124 11:10:11.016476 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.039662 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.039719 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.039734 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.039757 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.039776 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:11Z","lastTransitionTime":"2025-11-24T11:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.143099 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.143149 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.143170 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.143196 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.143216 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:11Z","lastTransitionTime":"2025-11-24T11:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
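Every Ready=False condition in this stretch carries the same root message: the kubelet found no CNI configuration under /etc/kubernetes/cni/net.d/, so no pod sandbox (for example network-metrics-daemon-nnrv7 just above) can be wired up, and the same four node events are re-recorded roughly every 100 ms. A minimal diagnostic sketch of that check is below; it assumes shell access to the node, the directory path is taken from the log, and the .conf/.conflist/.json extensions are the standard libcni convention rather than anything stated in this log.

```go
// cnicheck.go: hypothetical diagnostic helper, not part of the kubelet.
// Lists /etc/kubernetes/cni/net.d/ and reports whether any CNI config file
// is present; an empty result matches the "no CNI configuration file"
// condition repeated in the log above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const dir = "/etc/kubernetes/cni/net.d/" // path taken from the log
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI config dir:", err)
		return
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni scans for (assumed default behavior)
			fmt.Println("CNI config present:", e.Name())
			found++
		}
	}
	if found == 0 {
		fmt.Println(dir, "has no CNI config: the network provider has not started")
	}
}
```

Until the cluster network provider (OVN-Kubernetes on current OpenShift/CRC builds) writes a config file into that directory, the loop below simply repeats.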
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.245952 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.245987 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.245995 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.246008 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.246018 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:11Z","lastTransitionTime":"2025-11-24T11:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.348957 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.349049 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.349066 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.349092 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.349111 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:11Z","lastTransitionTime":"2025-11-24T11:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.451839 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.451911 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.451924 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.451941 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.451954 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:11Z","lastTransitionTime":"2025-11-24T11:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.554662 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.554743 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.554759 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.554785 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.554806 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:11Z","lastTransitionTime":"2025-11-24T11:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.658252 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.658317 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.658352 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.658419 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.658447 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:11Z","lastTransitionTime":"2025-11-24T11:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.761188 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.761251 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.761270 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.761295 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.761312 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:11Z","lastTransitionTime":"2025-11-24T11:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.873691 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.873759 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.873777 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.873805 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.873827 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:11Z","lastTransitionTime":"2025-11-24T11:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.975958 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.976008 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.976028 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.976047 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:11 crc kubenswrapper[5072]: I1124 11:10:11.976066 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:11Z","lastTransitionTime":"2025-11-24T11:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.015318 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.015343 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.015361 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:10:12 crc kubenswrapper[5072]: E1124 11:10:12.015481 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:10:12 crc kubenswrapper[5072]: E1124 11:10:12.015580 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:10:12 crc kubenswrapper[5072]: E1124 11:10:12.015667 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.079475 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.079566 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.079602 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.079632 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.079653 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:12Z","lastTransitionTime":"2025-11-24T11:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.182846 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.182948 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.182973 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.182998 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.183019 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:12Z","lastTransitionTime":"2025-11-24T11:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.286593 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.286628 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.286637 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.286651 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.286662 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:12Z","lastTransitionTime":"2025-11-24T11:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.389615 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.389658 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.389669 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.389687 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.389701 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:12Z","lastTransitionTime":"2025-11-24T11:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.492641 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.492704 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.492722 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.492750 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.492769 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:12Z","lastTransitionTime":"2025-11-24T11:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.596578 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.596641 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.596659 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.596684 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.596701 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:12Z","lastTransitionTime":"2025-11-24T11:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.699717 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.699788 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.699800 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.699837 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.699851 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:12Z","lastTransitionTime":"2025-11-24T11:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.802733 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.802788 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.802803 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.802824 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.802839 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:12Z","lastTransitionTime":"2025-11-24T11:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.905353 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.905726 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.905743 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.905767 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:12 crc kubenswrapper[5072]: I1124 11:10:12.905787 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:12Z","lastTransitionTime":"2025-11-24T11:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.008227 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.008271 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.008287 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.008306 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.008320 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:13Z","lastTransitionTime":"2025-11-24T11:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.015699 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:10:13 crc kubenswrapper[5072]: E1124 11:10:13.015860 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.110995 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.111029 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.111038 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.111051 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.111062 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:13Z","lastTransitionTime":"2025-11-24T11:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.213497 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.213531 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.213539 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.213552 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.213561 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:13Z","lastTransitionTime":"2025-11-24T11:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.315582 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.315616 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.315624 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.315636 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.315645 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:13Z","lastTransitionTime":"2025-11-24T11:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.417817 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.417854 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.417864 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.417879 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.417890 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:13Z","lastTransitionTime":"2025-11-24T11:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.520508 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.520547 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.520555 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.520570 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.520580 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:13Z","lastTransitionTime":"2025-11-24T11:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.622930 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.622988 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.623005 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.623029 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.623046 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:13Z","lastTransitionTime":"2025-11-24T11:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.725815 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.725867 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.725878 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.725896 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.725908 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:13Z","lastTransitionTime":"2025-11-24T11:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.829324 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.829428 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.829450 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.829476 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.829493 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:13Z","lastTransitionTime":"2025-11-24T11:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.931946 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.932001 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.932018 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.932042 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:13 crc kubenswrapper[5072]: I1124 11:10:13.932059 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:13Z","lastTransitionTime":"2025-11-24T11:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.016194 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.016226 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:10:14 crc kubenswrapper[5072]: E1124 11:10:14.016288 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.016338 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:10:14 crc kubenswrapper[5072]: E1124 11:10:14.016402 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:10:14 crc kubenswrapper[5072]: E1124 11:10:14.016481 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.033555 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.033622 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.033641 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.033666 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.033685 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:14Z","lastTransitionTime":"2025-11-24T11:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.136467 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.136542 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.136565 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.136588 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.136606 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:14Z","lastTransitionTime":"2025-11-24T11:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.240036 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.240088 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.240106 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.240129 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.240146 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:14Z","lastTransitionTime":"2025-11-24T11:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.344038 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.344114 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.344132 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.344161 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.344182 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:14Z","lastTransitionTime":"2025-11-24T11:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.446939 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.446994 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.447013 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.447039 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.447058 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:14Z","lastTransitionTime":"2025-11-24T11:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.549673 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.549723 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.549734 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.549753 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.549765 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:14Z","lastTransitionTime":"2025-11-24T11:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.652692 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.652757 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.652776 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.652802 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.652819 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:14Z","lastTransitionTime":"2025-11-24T11:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.755438 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.755497 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.755514 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.755536 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.755553 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:14Z","lastTransitionTime":"2025-11-24T11:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.858176 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.858237 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.858253 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.858279 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.858297 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:14Z","lastTransitionTime":"2025-11-24T11:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.961010 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.961064 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.961074 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.961088 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:14 crc kubenswrapper[5072]: I1124 11:10:14.961098 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:14Z","lastTransitionTime":"2025-11-24T11:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.015670 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:10:15 crc kubenswrapper[5072]: E1124 11:10:15.016418 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.016941 5072 scope.go:117] "RemoveContainer" containerID="06ce6673e7a7189e88659cf5cb63428c7ad38aea24f770411a7de6b3754b27b7" Nov 24 11:10:15 crc kubenswrapper[5072]: E1124 11:10:15.017338 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n4qmw_openshift-ovn-kubernetes(80fda759-ddfd-438a-b5a2-cb775ee1bf7e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.063720 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.063782 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.063804 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.063833 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.063861 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:15Z","lastTransitionTime":"2025-11-24T11:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.166364 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.166424 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.166438 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.166455 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.166472 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:15Z","lastTransitionTime":"2025-11-24T11:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.269393 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.269438 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.269450 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.269467 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.269479 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:15Z","lastTransitionTime":"2025-11-24T11:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.372150 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.372210 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.372226 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.372257 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.372274 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:15Z","lastTransitionTime":"2025-11-24T11:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.474028 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.474080 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.474091 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.474108 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.474121 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:15Z","lastTransitionTime":"2025-11-24T11:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.576270 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.576324 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.576342 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.576368 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.576411 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:15Z","lastTransitionTime":"2025-11-24T11:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.679229 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.679337 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.679362 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.679430 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.679456 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:15Z","lastTransitionTime":"2025-11-24T11:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.781767 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.781820 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.781828 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.781840 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.781848 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:15Z","lastTransitionTime":"2025-11-24T11:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.885017 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.885065 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.885082 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.885104 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.885126 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:15Z","lastTransitionTime":"2025-11-24T11:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.987678 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.987738 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.987760 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.987805 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:15 crc kubenswrapper[5072]: I1124 11:10:15.987831 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:15Z","lastTransitionTime":"2025-11-24T11:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.015328 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.015402 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.015350 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:10:16 crc kubenswrapper[5072]: E1124 11:10:16.015538 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:10:16 crc kubenswrapper[5072]: E1124 11:10:16.015699 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:10:16 crc kubenswrapper[5072]: E1124 11:10:16.015812 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.090933 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.090960 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.090970 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.090985 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.090996 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:16Z","lastTransitionTime":"2025-11-24T11:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.194087 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.194133 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.194148 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.194171 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.194187 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:16Z","lastTransitionTime":"2025-11-24T11:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.296688 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.296738 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.296754 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.296774 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.296789 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:16Z","lastTransitionTime":"2025-11-24T11:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.399588 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.399634 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.399652 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.399676 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.399692 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:16Z","lastTransitionTime":"2025-11-24T11:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.502657 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.502695 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.502705 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.502718 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.502728 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:16Z","lastTransitionTime":"2025-11-24T11:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.605830 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.605875 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.605886 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.605902 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.605915 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:16Z","lastTransitionTime":"2025-11-24T11:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.708221 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.708610 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.708778 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.708939 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.709106 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:16Z","lastTransitionTime":"2025-11-24T11:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.812484 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.812558 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.812583 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.812612 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.812633 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:16Z","lastTransitionTime":"2025-11-24T11:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.914990 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.915276 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.915425 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.915535 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:16 crc kubenswrapper[5072]: I1124 11:10:16.915632 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:16Z","lastTransitionTime":"2025-11-24T11:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.016723 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:10:17 crc kubenswrapper[5072]: E1124 11:10:17.016864 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.018089 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.018165 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.018188 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.018214 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.018237 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:17Z","lastTransitionTime":"2025-11-24T11:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
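The "No sandbox for pod can be found" / "Error syncing pod, skipping" pairs above are the direct consequence: while the runtime network is unready, the kubelet will not create sandboxes for pods that need cluster networking, so network-check-target, network-check-source, networking-console-plugin and network-metrics-daemon stay stuck; host-network pods are unaffected. For illustration only, this is the shape of the .conflist file the "no CNI configuration file" check looks for; on an OpenShift node the real file is written by the SDN pods (multus/ovn-kubernetes) once they run, so nothing should be hand-placed there:

    import json

    # Hypothetical, minimal CNI plugin chain; "example-net" is an invented name.
    conflist = {
        "cniVersion": "0.3.1",
        "name": "example-net",
        "plugins": [{"type": "loopback"}],
    }
    # A real file would live at /etc/kubernetes/cni/net.d/<NN>-<name>.conflist
    print(json.dumps(conflist, indent=2))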
Has your network provider started?"} Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.120974 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.121017 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.121028 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.121045 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.121057 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:17Z","lastTransitionTime":"2025-11-24T11:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.223040 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.223075 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.223085 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.223100 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.223111 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:17Z","lastTransitionTime":"2025-11-24T11:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.325758 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.325810 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.325825 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.325847 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.325864 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:17Z","lastTransitionTime":"2025-11-24T11:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.427911 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.427971 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.427989 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.428010 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.428027 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:17Z","lastTransitionTime":"2025-11-24T11:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.485220 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60100e7d-c8b1-4b18-8567-46e21096fa0f-metrics-certs\") pod \"network-metrics-daemon-nnrv7\" (UID: \"60100e7d-c8b1-4b18-8567-46e21096fa0f\") " pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:10:17 crc kubenswrapper[5072]: E1124 11:10:17.485348 5072 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:10:17 crc kubenswrapper[5072]: E1124 11:10:17.485423 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60100e7d-c8b1-4b18-8567-46e21096fa0f-metrics-certs podName:60100e7d-c8b1-4b18-8567-46e21096fa0f nodeName:}" failed. No retries permitted until 2025-11-24 11:10:49.485409179 +0000 UTC m=+101.196933655 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60100e7d-c8b1-4b18-8567-46e21096fa0f-metrics-certs") pod "network-metrics-daemon-nnrv7" (UID: "60100e7d-c8b1-4b18-8567-46e21096fa0f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.530268 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.530324 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.530340 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.530362 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.530407 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:17Z","lastTransitionTime":"2025-11-24T11:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.632848 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.632904 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.632920 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.632944 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.632963 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:17Z","lastTransitionTime":"2025-11-24T11:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
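The three entries above show the volume-mount side of the same outage: the reconciler tries to mount the metrics-certs secret volume, secret.go cannot resolve openshift-multus/metrics-daemon-secret because the object is not yet registered with the kubelet's informer cache, and nestedpendingoperations schedules the retry with exponential backoff (the delay doubles on repeated failures). The arithmetic is visible in the entry itself, as this sketch checks: the failure at 11:10:17.485 plus durationBeforeRetry 32s gives the logged deadline of 11:10:49.485:

    from datetime import datetime, timedelta

    failed_at = datetime.fromisoformat("2025-11-24T11:10:17.485409")
    retry_at = failed_at + timedelta(seconds=32)
    print(retry_at.isoformat())  # 2025-11-24T11:10:49.485409, matching the log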
Has your network provider started?"} Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.735226 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.735275 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.735286 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.735302 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.735313 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:17Z","lastTransitionTime":"2025-11-24T11:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.839144 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.839199 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.839215 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.839237 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.839253 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:17Z","lastTransitionTime":"2025-11-24T11:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.942310 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.942363 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.942408 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.942435 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:17 crc kubenswrapper[5072]: I1124 11:10:17.942452 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:17Z","lastTransitionTime":"2025-11-24T11:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.015829 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.015894 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.016074 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:10:18 crc kubenswrapper[5072]: E1124 11:10:18.016333 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:10:18 crc kubenswrapper[5072]: E1124 11:10:18.016461 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:10:18 crc kubenswrapper[5072]: E1124 11:10:18.016550 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.046301 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.046403 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.046435 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.046465 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.046486 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:18Z","lastTransitionTime":"2025-11-24T11:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
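The sandbox failures above are the second round for the network-check pods (11:10:16.015, then 11:10:18.015; network-metrics-daemon alternates on the odd seconds): each pod worker requeues its pod roughly every two seconds for as long as the CNI config is missing. A sketch that groups the sync errors by pod from the same placeholder kubelet.log capture:

    import re
    from collections import defaultdict

    pat = re.compile(r'E(\d{4} \d{2}:\d{2}:\d{2}\.\d+) .*"Error syncing pod, skipping"'
                     r'.*pod="([^"]+)"')
    retries = defaultdict(list)
    with open("kubelet.log") as f:   # placeholder capture of this journal
        for line in f:
            m = pat.search(line)
            if m:
                retries[m.group(2)].append(m.group(1))
    for pod, stamps in sorted(retries.items()):
        print(pod, "->", ", ".join(stamps))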
Has your network provider started?"} Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.149054 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.149112 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.149131 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.149153 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.149169 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:18Z","lastTransitionTime":"2025-11-24T11:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.252183 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.252247 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.252265 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.252290 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.252314 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:18Z","lastTransitionTime":"2025-11-24T11:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.355304 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.355413 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.355442 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.355468 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.355487 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:18Z","lastTransitionTime":"2025-11-24T11:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.458964 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.459021 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.459038 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.459074 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.459092 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:18Z","lastTransitionTime":"2025-11-24T11:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.562311 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.562415 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.562434 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.562459 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.562478 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:18Z","lastTransitionTime":"2025-11-24T11:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.666301 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.666346 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.666357 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.666400 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.666412 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:18Z","lastTransitionTime":"2025-11-24T11:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.769320 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.769354 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.769363 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.769393 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.769405 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:18Z","lastTransitionTime":"2025-11-24T11:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.872061 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.872106 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.872118 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.872132 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.872141 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:18Z","lastTransitionTime":"2025-11-24T11:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.974752 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.974796 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.974819 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.974848 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:18 crc kubenswrapper[5072]: I1124 11:10:18.974871 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:18Z","lastTransitionTime":"2025-11-24T11:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
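Worth noting from the timestamps above: the status blocks land about 103 ms apart (…18.769405, …18.872141, …18.974871), far faster than the default 10 s node-status interval, consistent with the sync loop re-running because the Ready condition cannot settle. The spacing can be checked directly:

    from datetime import datetime

    # Timestamps copied from the three "Node became not ready" entries above.
    stamps = ["11:10:18.769405", "11:10:18.872141", "11:10:18.974871"]
    ts = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
    print([round((b - a).total_seconds(), 3) for a, b in zip(ts, ts[1:])])  # [0.103, 0.103]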
Has your network provider started?"} Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.016117 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:10:19 crc kubenswrapper[5072]: E1124 11:10:19.016336 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.033341 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c05ddf6-986e-4bd6-95f0-7d734bc59140\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://894e58e94d99e8ef26722db709e0135d59ac4847daf001e37ce266c9baf02e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4b260f16a11dade8c8b120408cf2d167dd868a9b938f4231aa811546252c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountP
ath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wndk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.045024 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nnrv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60100e7d-c8b1-4b18-8567-46e21096fa0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nnrv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.057480 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.067313 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.078679 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.080110 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.080478 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.080531 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.080562 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.080583 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:19Z","lastTransitionTime":"2025-11-24T11:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.091170 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.100552 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.109605 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.121806 5072 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff57735
3410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.134512 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.150099 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.159605 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.170877 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3de15bd-d863-49c9-a84d-44e5af94f01c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1845d620994797b0fad3550ee243fdb5719b076cd21e2cd9fbdbfd84d5afd805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://802b58c2bb92a1887147eee76414a66c948e077ad8a3835bccd344ae67562b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ca0cd9727c9f25252266ba758cfa75b6d48b1f683f97b36bc3a40d6e4d9346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9d18d2efa1c3559a3a17858453a13c76b7567ffb215046c57556b661890c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91aa9d18d2efa1c3559a3a17858453a13c76b7567ffb215046c57556b661890c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.182810 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.182871 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.182885 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.182902 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.182914 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:19Z","lastTransitionTime":"2025-11-24T11:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.183973 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.195108 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.211705 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ce6673e7a7189e88659cf5cb63428c7ad38aea
24f770411a7de6b3754b27b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06ce6673e7a7189e88659cf5cb63428c7ad38aea24f770411a7de6b3754b27b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:09:57Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1124 11:09:57.933863 6751 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:09:57.933893 6751 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:09:57.933975 6751 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4qmw_openshift-ovn-kubernetes(80fda759-ddfd-438a-b5a2-cb775ee1bf7e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.227952 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69b8017daa872327d88eab8150845309e30c5cf37b229292e7c8a80e5d599c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.287213 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.287312 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.287338 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.287409 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.287436 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:19Z","lastTransitionTime":"2025-11-24T11:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.389772 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.389804 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.389812 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.389826 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.389835 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:19Z","lastTransitionTime":"2025-11-24T11:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.456565 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t8b9x_1a9fe7b3-71a3-4388-8ee4-7531ceef6049/kube-multus/0.log" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.456616 5072 generic.go:334] "Generic (PLEG): container finished" podID="1a9fe7b3-71a3-4388-8ee4-7531ceef6049" containerID="96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74" exitCode=1 Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.456642 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t8b9x" event={"ID":"1a9fe7b3-71a3-4388-8ee4-7531ceef6049","Type":"ContainerDied","Data":"96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74"} Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.457035 5072 scope.go:117] "RemoveContainer" containerID="96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.474990 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nnrv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60100e7d-c8b1-4b18-8567-46e21096fa0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nnrv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.490637 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.493250 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.493296 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.493308 5072 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.493331 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.493343 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:19Z","lastTransitionTime":"2025-11-24T11:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.507408 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.519116 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.535821 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:10:18Z\\\",\\\"message\\\":\\\"2025-11-24T11:09:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_93e4312d-4a0d-4245-ac97-02477f03c30c\\\\n2025-11-24T11:09:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_93e4312d-4a0d-4245-ac97-02477f03c30c to /host/opt/cni/bin/\\\\n2025-11-24T11:09:33Z [verbose] multus-daemon started\\\\n2025-11-24T11:09:33Z [verbose] Readiness Indicator file check\\\\n2025-11-24T11:10:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.547106 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.556347 5072 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.567160 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c05ddf6-986e-4bd6-95f0-7d734bc59140\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://894e58e94d99e8ef26722db709e0135d59ac4847daf001e37ce266c9baf02e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4b260f16a11dade8c8b120408cf2d167dd868a9b938f4231aa811546252c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wndk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 
11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.581347 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.596194 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.596236 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.596245 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.596262 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.596273 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:19Z","lastTransitionTime":"2025-11-24T11:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.597689 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.617207 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.637838 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.651617 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3de15bd-d863-49c9-a84d-44e5af94f01c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1845d620994797b0fad3550ee243fdb5719b076cd21e2cd9fbdbfd84d5afd805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://802b58c2bb92a1887147eee76414a66c948e077ad8a3835bccd344ae67562b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedA
t\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ca0cd9727c9f25252266ba758cfa75b6d48b1f683f97b36bc3a40d6e4d9346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9d18d2efa1c3559a3a17858453a13c76b7567ffb215046c57556b661890c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91aa9d18d2efa1c3559a3a17858453a13c76b7567ffb215046c57556b661890c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.671828 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.691030 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.698775 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.699245 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.699332 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.699449 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.699546 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:19Z","lastTransitionTime":"2025-11-24T11:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.723568 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ce6673e7a7189e88659cf5cb63428c7ad38aea24f770411a7de6b3754b27b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06ce6673e7a7189e88659cf5cb63428c7ad38aea24f770411a7de6b3754b27b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:09:57Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1124 11:09:57.933863 6751 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:09:57.933893 6751 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:09:57.933975 6751 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n4qmw_openshift-ovn-kubernetes(80fda759-ddfd-438a-b5a2-cb775ee1bf7e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.746618 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69b8017daa872327d88eab8150845309e30c5cf37b229292e7c8a80e5d599c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.802098 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.802172 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:19 crc 
kubenswrapper[5072]: I1124 11:10:19.802197 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.802224 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.802249 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:19Z","lastTransitionTime":"2025-11-24T11:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.904922 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.904979 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.904998 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.905025 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:19 crc kubenswrapper[5072]: I1124 11:10:19.905043 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:19Z","lastTransitionTime":"2025-11-24T11:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.008461 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.008729 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.008818 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.008901 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.009072 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:20Z","lastTransitionTime":"2025-11-24T11:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.016032 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:10:20 crc kubenswrapper[5072]: E1124 11:10:20.016343 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.016068 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:10:20 crc kubenswrapper[5072]: E1124 11:10:20.016641 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.016031 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:10:20 crc kubenswrapper[5072]: E1124 11:10:20.016882 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.112063 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.112335 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.112425 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.112499 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.112580 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:20Z","lastTransitionTime":"2025-11-24T11:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.215448 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.215480 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.215488 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.215502 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.215511 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:20Z","lastTransitionTime":"2025-11-24T11:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.317138 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.317193 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.317217 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.317231 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.317240 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:20Z","lastTransitionTime":"2025-11-24T11:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.419871 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.419957 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.419982 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.420013 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.420036 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:20Z","lastTransitionTime":"2025-11-24T11:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.462101 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t8b9x_1a9fe7b3-71a3-4388-8ee4-7531ceef6049/kube-multus/0.log" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.462197 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t8b9x" event={"ID":"1a9fe7b3-71a3-4388-8ee4-7531ceef6049","Type":"ContainerStarted","Data":"db181b35d5ddd8cb7ce31d9293b82a515a8889794cf9696c664b101693247cc6"} Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.480206 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"
imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.497715 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.522110 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.522592 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.522649 5072 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.522667 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.522695 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.522714 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:20Z","lastTransitionTime":"2025-11-24T11:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.544180 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.559898 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.576896 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.600211 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ce6673e7a7189e88659cf5cb63428c7ad38aea
24f770411a7de6b3754b27b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06ce6673e7a7189e88659cf5cb63428c7ad38aea24f770411a7de6b3754b27b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:09:57Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1124 11:09:57.933863 6751 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:09:57.933893 6751 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:09:57.933975 6751 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4qmw_openshift-ovn-kubernetes(80fda759-ddfd-438a-b5a2-cb775ee1bf7e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.617264 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3de15bd-d863-49c9-a84d-44e5af94f01c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1845d620994797b0fad3550ee243fdb5719b076cd21e2cd9fbdbfd84d5afd805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://802b58c2bb92a1887147eee76414a66c948e077ad8a3835bccd344ae67562b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ca0cd9727c9f25252266ba758cfa75b6d48b1f683f97b36bc3a40d6e4d9346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9d18d2efa1c3559a3a17858453a13c76b7567ffb215046c57556b661890c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91aa9d18d2efa1c3559a3a17858453a13c76b7567ffb215046c57556b661890c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.625168 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.625221 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.625238 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.625261 5072 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.625278 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:20Z","lastTransitionTime":"2025-11-24T11:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.640527 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69b8017daa872327d88eab8150845309e30c5cf37b229292e7c8a80e5d599c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"na
me\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade
0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:3
1Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.657634 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.672118 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.690059 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db181b35d5ddd8cb7ce31d9293b82a515a8889794cf9696c664b101693247cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:10:18Z\\\",\\\"message\\\":\\\"2025-11-24T11:09:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_93e4312d-4a0d-4245-ac97-02477f03c30c\\\\n2025-11-24T11:09:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_93e4312d-4a0d-4245-ac97-02477f03c30c to /host/opt/cni/bin/\\\\n2025-11-24T11:09:33Z [verbose] multus-daemon started\\\\n2025-11-24T11:09:33Z [verbose] Readiness Indicator file check\\\\n2025-11-24T11:10:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.706032 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.717097 5072 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.729245 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.729331 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.729353 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.729414 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.729437 5072 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:20Z","lastTransitionTime":"2025-11-24T11:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.730706 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c05ddf6-986e-4bd6-95f0-7d734bc59140\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://894e58e94d99e8ef26722db709e0135d59ac4847daf001e37ce266c9baf02e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4b260f16a11dade8c8b120408cf2d167dd868a9b938f4231aa811546252c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wndk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.739551 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.739599 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.739615 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.739638 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.739655 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:20Z","lastTransitionTime":"2025-11-24T11:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.744874 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nnrv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60100e7d-c8b1-4b18-8567-46e21096fa0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nnrv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:20 crc kubenswrapper[5072]: E1124 11:10:20.759329 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-24T11:10:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.763856 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.763916 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.763938 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.763967 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.763989 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:20Z","lastTransitionTime":"2025-11-24T11:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.768296 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:20 crc kubenswrapper[5072]: E1124 11:10:20.778059 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.782241 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.782294 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
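[Annotation: independent of the webhook failures, the node stays NotReady because the container runtime reports no CNI config. The following Go sketch is a minimal, illustrative approximation of that directory check, not the actual kubelet/CRI-O code path; the assumption that the runtime accepts the conventional .conf/.conflist/.json config extensions is mine.]

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	// Directory taken verbatim from the NetworkReady message in this log.
	confDir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(confDir)
	if err != nil {
		log.Fatal(err)
	}
	var found []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		// Assumption: the conventional CNI config file extensions.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			found = append(found, e.Name())
		}
	}
	if len(found) == 0 {
		// The condition behind NetworkReady=false / NetworkPluginNotReady.
		fmt.Printf("no CNI configuration file in %s; has your network provider started?\n", confDir)
		return
	}
	fmt.Printf("CNI config files present: %v\n", found)
}

[Until the network provider, which on this cluster appears to be ovn-kubernetes, writes a config into that directory, the kubelet keeps emitting the NodeNotReady events seen above.]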
event="NodeHasNoDiskPressure" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.782311 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.782336 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.782356 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:20Z","lastTransitionTime":"2025-11-24T11:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:20 crc kubenswrapper[5072]: E1124 11:10:20.795980 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.799773 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.799808 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.799816 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.799843 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.799855 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:20Z","lastTransitionTime":"2025-11-24T11:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:20 crc kubenswrapper[5072]: E1124 11:10:20.815625 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.820511 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.820623 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.820649 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.820680 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.820716 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:20Z","lastTransitionTime":"2025-11-24T11:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:20 crc kubenswrapper[5072]: E1124 11:10:20.838123 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:20 crc kubenswrapper[5072]: E1124 11:10:20.838228 5072 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.840050 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.840081 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.840090 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.840101 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.840110 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:20Z","lastTransitionTime":"2025-11-24T11:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.942783 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.942817 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.942825 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.942838 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:20 crc kubenswrapper[5072]: I1124 11:10:20.942848 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:20Z","lastTransitionTime":"2025-11-24T11:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.015927 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:10:21 crc kubenswrapper[5072]: E1124 11:10:21.016125 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.045628 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.045674 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.045684 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.045698 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.045706 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:21Z","lastTransitionTime":"2025-11-24T11:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.148199 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.148271 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.148288 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.148314 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.148333 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:21Z","lastTransitionTime":"2025-11-24T11:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.251367 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.251462 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.251479 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.251505 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.251524 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:21Z","lastTransitionTime":"2025-11-24T11:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.353703 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.353749 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.353758 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.353773 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.353784 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:21Z","lastTransitionTime":"2025-11-24T11:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.456273 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.456410 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.456433 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.456457 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.456476 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:21Z","lastTransitionTime":"2025-11-24T11:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.558949 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.558996 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.559013 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.559036 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.559055 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:21Z","lastTransitionTime":"2025-11-24T11:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.661908 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.661950 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.661963 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.661980 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.661991 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:21Z","lastTransitionTime":"2025-11-24T11:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.765026 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.765088 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.765099 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.765116 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.765128 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:21Z","lastTransitionTime":"2025-11-24T11:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.867247 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.867307 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.867330 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.867357 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.867418 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:21Z","lastTransitionTime":"2025-11-24T11:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.968955 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.968989 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.968998 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.969013 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:21 crc kubenswrapper[5072]: I1124 11:10:21.969023 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:21Z","lastTransitionTime":"2025-11-24T11:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.015896 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.015932 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:10:22 crc kubenswrapper[5072]: E1124 11:10:22.016059 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.015907 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:10:22 crc kubenswrapper[5072]: E1124 11:10:22.016209 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:10:22 crc kubenswrapper[5072]: E1124 11:10:22.016316 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.070900 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.070925 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.070936 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.070948 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.070958 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:22Z","lastTransitionTime":"2025-11-24T11:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.174102 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.174168 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.174188 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.174213 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.174235 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:22Z","lastTransitionTime":"2025-11-24T11:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.277396 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.277431 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.277443 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.277456 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.277466 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:22Z","lastTransitionTime":"2025-11-24T11:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.380470 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.380565 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.380582 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.380604 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.380622 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:22Z","lastTransitionTime":"2025-11-24T11:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.482993 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.483073 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.483091 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.483115 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.483134 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:22Z","lastTransitionTime":"2025-11-24T11:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.586291 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.586347 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.586363 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.586416 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.586439 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:22Z","lastTransitionTime":"2025-11-24T11:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.692602 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.692674 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.692686 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.692707 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.692719 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:22Z","lastTransitionTime":"2025-11-24T11:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.795715 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.795756 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.795764 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.795777 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.795787 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:22Z","lastTransitionTime":"2025-11-24T11:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.899393 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.899450 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.899467 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.899490 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:22 crc kubenswrapper[5072]: I1124 11:10:22.899514 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:22Z","lastTransitionTime":"2025-11-24T11:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.003108 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.003175 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.003192 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.003215 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.003234 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:23Z","lastTransitionTime":"2025-11-24T11:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.015904 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:10:23 crc kubenswrapper[5072]: E1124 11:10:23.016200 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.105996 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.106071 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.106094 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.106123 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.106145 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:23Z","lastTransitionTime":"2025-11-24T11:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.209454 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.209520 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.209540 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.209566 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.209595 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:23Z","lastTransitionTime":"2025-11-24T11:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.312849 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.312899 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.312911 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.312928 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.312941 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:23Z","lastTransitionTime":"2025-11-24T11:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.416846 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.416895 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.416912 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.416939 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.416958 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:23Z","lastTransitionTime":"2025-11-24T11:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.519530 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.519603 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.519626 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.519656 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.519682 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:23Z","lastTransitionTime":"2025-11-24T11:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.622988 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.623047 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.623064 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.623087 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.623107 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:23Z","lastTransitionTime":"2025-11-24T11:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.726793 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.726860 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.726878 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.726903 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.726921 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:23Z","lastTransitionTime":"2025-11-24T11:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.830820 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.830887 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.830905 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.830931 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.830949 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:23Z","lastTransitionTime":"2025-11-24T11:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.934214 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.934335 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.934353 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.934416 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:23 crc kubenswrapper[5072]: I1124 11:10:23.934441 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:23Z","lastTransitionTime":"2025-11-24T11:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.016259 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.016416 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.016464 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:10:24 crc kubenswrapper[5072]: E1124 11:10:24.016614 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:10:24 crc kubenswrapper[5072]: E1124 11:10:24.016949 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:10:24 crc kubenswrapper[5072]: E1124 11:10:24.017278 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
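The condition={...} payload in the setters.go entries above is plain JSON. A minimal Go sketch of pulling it out of a log line and decoding it (illustrative only; the sample line is abbreviated from the log, and none of this code is part of the logged system):

    // parsecond.go - extract and decode the "Node became not ready" condition
    // JSON from a kubelet log line, so reason/message can be inspected.
    package main

    import (
        "encoding/json"
        "fmt"
        "strings"
    )

    // NodeCondition mirrors the fields visible in the logged condition JSON.
    type NodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        // Abbreviated sample of the setters.go:603 entry above.
        line := `I1124 11:10:23.106145 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:23Z","lastTransitionTime":"2025-11-24T11:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false"}`

        // The JSON object is everything after "condition=".
        _, raw, ok := strings.Cut(line, "condition=")
        if !ok {
            fmt.Println("no condition payload in line")
            return
        }
        var c NodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            fmt.Println("decode error:", err)
            return
        }
        fmt.Printf("%s=%s reason=%s\nmessage: %s\n", c.Type, c.Status, c.Reason, c.Message)
    }

Run against the full entries in this excerpt, every cycle decodes to the same Ready=False / KubeletNotReady condition; only the heartbeat timestamps advance.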
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.038912 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.038967 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.038985 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.039007 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.039025 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:24Z","lastTransitionTime":"2025-11-24T11:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.142305 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.142784 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.142990 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.143203 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.143545 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:24Z","lastTransitionTime":"2025-11-24T11:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.248098 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.248150 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.248167 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.248195 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.248214 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:24Z","lastTransitionTime":"2025-11-24T11:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.351532 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.351587 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.351603 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.351627 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.351648 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:24Z","lastTransitionTime":"2025-11-24T11:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.454752 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.454829 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.454845 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.454868 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.454886 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:24Z","lastTransitionTime":"2025-11-24T11:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.557815 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.557868 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.557884 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.557905 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.557924 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:24Z","lastTransitionTime":"2025-11-24T11:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.661185 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.661712 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.661876 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.662029 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.662163 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:24Z","lastTransitionTime":"2025-11-24T11:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.764888 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.764953 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.765031 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.765064 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.765086 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:24Z","lastTransitionTime":"2025-11-24T11:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.868236 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.868297 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.868313 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.868336 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.868353 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:24Z","lastTransitionTime":"2025-11-24T11:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.971267 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.971319 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.971335 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.971360 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:24 crc kubenswrapper[5072]: I1124 11:10:24.971403 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:24Z","lastTransitionTime":"2025-11-24T11:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.016502 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:10:25 crc kubenswrapper[5072]: E1124 11:10:25.016760 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
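The root complaint in every one of these entries is the same: nothing has yet written a CNI configuration into /etc/kubernetes/cni/net.d/, so the kubelet keeps the runtime network NotReady and refuses to sync pod sandboxes. A small Go sketch of the equivalent check, assuming it is run on the node itself (the file extensions are the usual CNI conventions, not taken from this log):

    // cnicheck.go - look for CNI configuration files in the directory the
    // kubelet names in its NetworkPluginNotReady message.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d" // directory from the kubelet message
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read CNI conf dir:", err)
            return
        }
        found := false
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("CNI config present:", e.Name())
                found = true
            }
        }
        if !found {
            fmt.Println("no CNI configuration files found; NetworkReady stays false until the network plugin writes one")
        }
    }

On this node the file is expected to appear once the ovn-kubernetes/multus pods seen later in the log finish starting; until then the sandbox-creation errors below keep repeating.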
pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.074516 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.074587 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.074622 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.074652 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.074675 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:25Z","lastTransitionTime":"2025-11-24T11:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.184727 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.184819 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.184840 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.184873 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.184894 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:25Z","lastTransitionTime":"2025-11-24T11:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.289059 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.289135 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.289149 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.289176 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.289191 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:25Z","lastTransitionTime":"2025-11-24T11:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.392992 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.393055 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.393069 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.393097 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.393118 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:25Z","lastTransitionTime":"2025-11-24T11:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.495968 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.496053 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.496078 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.496113 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.496139 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:25Z","lastTransitionTime":"2025-11-24T11:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.599341 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.599436 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.599457 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.599484 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.599503 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:25Z","lastTransitionTime":"2025-11-24T11:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.702540 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.702602 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.702619 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.702644 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.702661 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:25Z","lastTransitionTime":"2025-11-24T11:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.806777 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.806847 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.806866 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.806898 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.806919 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:25Z","lastTransitionTime":"2025-11-24T11:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.909065 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.909150 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.909169 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.909192 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:25 crc kubenswrapper[5072]: I1124 11:10:25.909214 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:25Z","lastTransitionTime":"2025-11-24T11:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.011600 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.011683 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.011704 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.011733 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.011762 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:26Z","lastTransitionTime":"2025-11-24T11:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.016309 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.016351 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.016316 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:10:26 crc kubenswrapper[5072]: E1124 11:10:26.017024 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
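The status_manager entries a few seconds further down (from 11:10:26.511 onward) show a second, independent failure: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is serving a certificate that expired on 2025-08-24T17:21:41Z, so every status patch is rejected with an x509 error. A hedged sketch of how one might confirm the certificate's validity window from the node (assumes loopback access to port 9743; not part of the logged system):

    // certcheck.go - read the serving certificate of the webhook endpoint named
    // in the log and print its validity window.
    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
            InsecureSkipVerify: true, // we only want to read the cert, not trust it
        })
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()

        certs := conn.ConnectionState().PeerCertificates
        if len(certs) == 0 {
            fmt.Println("no peer certificate presented")
            return
        }
        cert := certs[0]
        fmt.Printf("subject:    %s\nnot before: %s\nnot after:  %s\n",
            cert.Subject,
            cert.NotBefore.Format(time.RFC3339),
            cert.NotAfter.Format(time.RFC3339))
        if time.Now().After(cert.NotAfter) {
            fmt.Println("certificate is expired, matching the x509 errors in this log")
        }
    }

If the window has lapsed, as the log's "current time 2025-11-24T11:10:26Z is after 2025-08-24T17:21:41Z" messages indicate, rotating the webhook's serving certificate (how depends on the deployment) is what clears the errors.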
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:10:26 crc kubenswrapper[5072]: E1124 11:10:26.017224 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:10:26 crc kubenswrapper[5072]: E1124 11:10:26.017351 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.017478 5072 scope.go:117] "RemoveContainer" containerID="06ce6673e7a7189e88659cf5cb63428c7ad38aea24f770411a7de6b3754b27b7" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.114284 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.114323 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.114338 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.114358 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.114390 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:26Z","lastTransitionTime":"2025-11-24T11:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.217457 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.217557 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.217588 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.217618 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.217639 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:26Z","lastTransitionTime":"2025-11-24T11:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.319888 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.319945 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.319962 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.319985 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.320014 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:26Z","lastTransitionTime":"2025-11-24T11:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.422654 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.422707 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.422719 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.422737 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.422749 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:26Z","lastTransitionTime":"2025-11-24T11:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.486196 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4qmw_80fda759-ddfd-438a-b5a2-cb775ee1bf7e/ovnkube-controller/2.log" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.488768 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" event={"ID":"80fda759-ddfd-438a-b5a2-cb775ee1bf7e","Type":"ContainerStarted","Data":"b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434"} Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.489495 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.511798 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69b8017daa872327d88eab8150845309e30c5cf37b229292e7c8a80e5d599c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\
\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:26Z is after 2025-08-24T17:21:41Z"
[... one further identical node-status cycle at 11:10:26.526, ending in the same "Node became not ready" condition ...]
Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.527198 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.539414 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c05ddf6-986e-4bd6-95f0-7d734bc59140\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://894e58e94d99e8ef26722db709e0135d59ac4847daf001e37ce266c9baf02e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4b260f16a11dade8c8b120408cf2d167dd868a9b938f4231aa811546252c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wndk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:26Z is after 2025-08-24T17:21:41Z" Nov 24 
11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.556955 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nnrv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60100e7d-c8b1-4b18-8567-46e21096fa0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nnrv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.573402 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.593886 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.610171 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.628905 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.628943 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.628954 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.628972 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.628984 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:26Z","lastTransitionTime":"2025-11-24T11:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.635978 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db181b35d5ddd8cb7ce31d9293b82a515a8889794cf9696c664b101693247cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:10:18Z\\\",\\\"message\\\":\\\"2025-11-24T11:09:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_93e4312d-4a0d-4245-ac97-02477f03c30c\\\\n2025-11-24T11:09:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_93e4312d-4a0d-4245-ac97-02477f03c30c to /host/opt/cni/bin/\\\\n2025-11-24T11:09:33Z [verbose] multus-daemon started\\\\n2025-11-24T11:09:33Z [verbose] Readiness Indicator file check\\\\n2025-11-24T11:10:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.653456 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.671327 5072 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230
e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.687898 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.703564 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.716697 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.728705 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3de15bd-d863-49c9-a84d-44e5af94f01c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1845d620994797b0fad3550ee243fdb5719b076cd21e2cd9fbdbfd84d5afd805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://802b58c2bb92a1887147eee76414a66c948e077ad8a3835bccd344ae67562b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ca0cd9727c9f25252266ba758cfa75b6d48b1f683f97b36bc3a40d6e4d9346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9d18d2efa1c3559a3a17858453a13c76b7567ffb215046c57556b661890c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91aa9d18d2efa1c3559a3a17858453a13c76b7567ffb215046c57556b661890c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.731561 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.731585 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.731594 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.731607 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.731616 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:26Z","lastTransitionTime":"2025-11-24T11:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.750205 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.771264 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.800338 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b30fc71ef9fdf26e114844d344131e79b2ea981d
3e69760bb92b1279f0b3c434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06ce6673e7a7189e88659cf5cb63428c7ad38aea24f770411a7de6b3754b27b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:09:57Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1124 11:09:57.933863 6751 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:09:57.933893 6751 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:09:57.933975 6751 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.834304 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.834361 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.834410 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.834435 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.834453 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:26Z","lastTransitionTime":"2025-11-24T11:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.936534 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.936578 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.936588 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.936602 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:26 crc kubenswrapper[5072]: I1124 11:10:26.936614 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:26Z","lastTransitionTime":"2025-11-24T11:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.016703 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:10:27 crc kubenswrapper[5072]: E1124 11:10:27.016854 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.038652 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.038720 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.038737 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.038759 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.038777 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:27Z","lastTransitionTime":"2025-11-24T11:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.141189 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.141252 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.141274 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.141302 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.141322 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:27Z","lastTransitionTime":"2025-11-24T11:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.244276 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.244355 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.244426 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.244451 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.244468 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:27Z","lastTransitionTime":"2025-11-24T11:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.347426 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.347482 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.347506 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.347534 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.347553 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:27Z","lastTransitionTime":"2025-11-24T11:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.451855 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.451900 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.451917 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.451936 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.451951 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:27Z","lastTransitionTime":"2025-11-24T11:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.496431 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4qmw_80fda759-ddfd-438a-b5a2-cb775ee1bf7e/ovnkube-controller/3.log" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.498107 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4qmw_80fda759-ddfd-438a-b5a2-cb775ee1bf7e/ovnkube-controller/2.log" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.502318 5072 generic.go:334] "Generic (PLEG): container finished" podID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerID="b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434" exitCode=1 Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.502364 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" event={"ID":"80fda759-ddfd-438a-b5a2-cb775ee1bf7e","Type":"ContainerDied","Data":"b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434"} Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.502447 5072 scope.go:117] "RemoveContainer" containerID="06ce6673e7a7189e88659cf5cb63428c7ad38aea24f770411a7de6b3754b27b7" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.504332 5072 scope.go:117] "RemoveContainer" containerID="b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434" Nov 24 11:10:27 crc kubenswrapper[5072]: E1124 11:10:27.505111 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n4qmw_openshift-ovn-kubernetes(80fda759-ddfd-438a-b5a2-cb775ee1bf7e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.533232 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3de15bd-d863-49c9-a84d-44e5af94f01c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1845d620994797b0fad3550ee243fdb5719b076cd21e2cd9fbdbfd84d5afd805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://802b58c2bb92a1887147eee76414a66c948e077ad8a3835bccd344ae67562b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ca0cd9727c9f25252266ba758cfa75b6d48b1f683f97b36bc3a40d6e4d9346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9d18d2efa1c3559a3a17858453a13c76b7567ffb215046c57556b661890c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91aa9d18d2efa1c3559a3a17858453a13c76b7567ffb215046c57556b661890c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.556037 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.556123 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.556187 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.556217 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.556238 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:27Z","lastTransitionTime":"2025-11-24T11:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.558976 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.581501 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.614514 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b30fc71ef9fdf26e114844d344131e79b2ea981d
3e69760bb92b1279f0b3c434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06ce6673e7a7189e88659cf5cb63428c7ad38aea24f770411a7de6b3754b27b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:09:57Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1124 11:09:57.933863 6751 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:09:57.933893 6751 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:09:57.933975 6751 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:10:27Z\\\",\\\"message\\\":\\\"twork controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:27Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:10:27.057212 7115 services_controller.go:434] Service openshift-service-ca-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-service-ca-operator 9ab1e41d-7da1-46d4-b0d8-4395ba0a6601 4750 0 2025-02-23 05:12:18 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:service-ca-operator] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true 
include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0072d895f \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{S\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\"
,\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.643078 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69b8017daa872327d88eab8150845309e30c5cf37b229292e7c8a80e5d599c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.659466 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.659519 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:27 crc 
kubenswrapper[5072]: I1124 11:10:27.659537 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.659561 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.659578 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:27Z","lastTransitionTime":"2025-11-24T11:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.662459 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c05ddf6-986e-4bd6-95f0-7d734bc59140\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://894e58e94d99e8ef26722db709e0135d59ac4847daf001e37ce266c9baf02e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4b260f16a11dade8c8b120408cf2d167dd868a9b938f4231aa811546252c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:0
9:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wndk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.679686 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nnrv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60100e7d-c8b1-4b18-8567-46e21096fa0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nnrv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.700932 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.720661 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.736326 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.757513 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db181b35d5ddd8cb7ce31d9293b82a515a8889794cf9696c664b101693247cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:10:18Z\\\",\\\"message\\\":\\\"2025-11-24T11:09:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_93e4312d-4a0d-4245-ac97-02477f03c30c\\\\n2025-11-24T11:09:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_93e4312d-4a0d-4245-ac97-02477f03c30c to /host/opt/cni/bin/\\\\n2025-11-24T11:09:33Z [verbose] multus-daemon started\\\\n2025-11-24T11:09:33Z [verbose] Readiness Indicator file check\\\\n2025-11-24T11:10:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.763269 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.763330 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.763358 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.763407 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.763425 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:27Z","lastTransitionTime":"2025-11-24T11:10:27Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.777889 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.793275 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.811322 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.834978 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.854694 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.865626 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.865741 5072 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.865768 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.865799 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.865823 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:27Z","lastTransitionTime":"2025-11-24T11:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.872465 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.968521 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.968593 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.968611 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.968637 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:27 crc kubenswrapper[5072]: I1124 11:10:27.968657 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:27Z","lastTransitionTime":"2025-11-24T11:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.018025 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.018147 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:10:28 crc kubenswrapper[5072]: E1124 11:10:28.018210 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.018244 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:10:28 crc kubenswrapper[5072]: E1124 11:10:28.018496 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:10:28 crc kubenswrapper[5072]: E1124 11:10:28.018627 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.071126 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.071180 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.071198 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.071223 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.071280 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:28Z","lastTransitionTime":"2025-11-24T11:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.174924 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.174997 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.175014 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.175038 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.175059 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:28Z","lastTransitionTime":"2025-11-24T11:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.277782 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.277854 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.277867 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.277883 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.277896 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:28Z","lastTransitionTime":"2025-11-24T11:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.380794 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.380861 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.380877 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.380893 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.380905 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:28Z","lastTransitionTime":"2025-11-24T11:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.483438 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.483521 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.483533 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.483550 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.483580 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:28Z","lastTransitionTime":"2025-11-24T11:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.506878 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4qmw_80fda759-ddfd-438a-b5a2-cb775ee1bf7e/ovnkube-controller/3.log" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.510562 5072 scope.go:117] "RemoveContainer" containerID="b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434" Nov 24 11:10:28 crc kubenswrapper[5072]: E1124 11:10:28.510801 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n4qmw_openshift-ovn-kubernetes(80fda759-ddfd-438a-b5a2-cb775ee1bf7e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.527521 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3de15bd-d863-49c9-a84d-44e5af94f01c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1845d620994797b0fad3550ee243fdb5719b076cd21e2cd9fbdbfd84d5afd805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://802b58c2bb92a1887147eee76414a66c948e077ad8a3835bccd344ae67562b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ca0cd9727c9f25252266ba758cfa75b6d48b1f683f97b36bc3a40d6e4d9346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9d18d2efa1c3559a3a17858453a13c76b7567ffb215046c57556b661890c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91aa9d18d2efa1c3559a3a17858453a13c76b7567ffb215046c57556b661890c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.545053 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.561637 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.586342 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.586424 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.586435 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.586456 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.586469 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:28Z","lastTransitionTime":"2025-11-24T11:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.591835 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:10:27Z\\\",\\\"message\\\":\\\"twork controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:27Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:10:27.057212 7115 services_controller.go:434] Service openshift-service-ca-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-service-ca-operator 9ab1e41d-7da1-46d4-b0d8-4395ba0a6601 4750 0 2025-02-23 05:12:18 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:service-ca-operator] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0072d895f \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{S\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:10:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=ovnkube-controller pod=ovnkube-node-n4qmw_openshift-ovn-kubernetes(80fda759-ddfd-438a-b5a2-cb775ee1bf7e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.612155 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69b8017daa872327d88eab8150845309e30c5cf37b229292e7c8a80e5d599c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"
ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\
\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.628781 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b1
54edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.645286 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.657552 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.675174 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db181b35d5ddd8cb7ce31d9293b82a515a8889794cf9696c664b101693247cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:10:18Z\\\",\\\"message\\\":\\\"2025-11-24T11:09:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_93e4312d-4a0d-4245-ac97-02477f03c30c\\\\n2025-11-24T11:09:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_93e4312d-4a0d-4245-ac97-02477f03c30c to /host/opt/cni/bin/\\\\n2025-11-24T11:09:33Z [verbose] multus-daemon started\\\\n2025-11-24T11:09:33Z [verbose] Readiness Indicator file check\\\\n2025-11-24T11:10:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.688884 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.688947 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.688961 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.688979 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.688991 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:28Z","lastTransitionTime":"2025-11-24T11:10:28Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.693671 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.710058 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.726616 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c05ddf6-986e-4bd6-95f0-7d734bc59140\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://894e58e94d99e8ef26722db709e0135d59ac4847daf001e37ce266c9baf02e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4b260f16a11dade8c8b120408cf2d167dd868a9b938f4231aa811546252c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wndk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:28Z is after 2025-08-24T17:21:41Z" Nov 24 
11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.740939 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nnrv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60100e7d-c8b1-4b18-8567-46e21096fa0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nnrv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.759708 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.779993 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.791015 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.791088 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.791114 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.791143 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.791185 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:28Z","lastTransitionTime":"2025-11-24T11:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.802480 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.820770 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.895079 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.895139 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.895159 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.895184 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.895201 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:28Z","lastTransitionTime":"2025-11-24T11:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.998556 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.998599 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.998615 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.998636 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:28 crc kubenswrapper[5072]: I1124 11:10:28.998653 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:28Z","lastTransitionTime":"2025-11-24T11:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.016714 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:10:29 crc kubenswrapper[5072]: E1124 11:10:29.016846 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.042513 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b30fc71ef9fdf26e114844d344131e79b2ea981d
3e69760bb92b1279f0b3c434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:10:27Z\\\",\\\"message\\\":\\\"twork controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:27Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:10:27.057212 7115 services_controller.go:434] Service openshift-service-ca-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-service-ca-operator 9ab1e41d-7da1-46d4-b0d8-4395ba0a6601 4750 0 2025-02-23 05:12:18 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:service-ca-operator] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0072d895f \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{S\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:10:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4qmw_openshift-ovn-kubernetes(80fda759-ddfd-438a-b5a2-cb775ee1bf7e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:29Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.065587 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3de15bd-d863-49c9-a84d-44e5af94f01c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1845d620994797b0fad3550ee243fdb5719b076cd21e2cd9fbdbfd84d5afd805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://802b58c2bb92a1887147eee76414a66c948e077ad8a3835bccd344ae67562b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ca0cd9727c9f25252266ba758cfa75b6d48b1f683f97b36bc3a40d6e4d9346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9d18d2efa1c3559a3a17858453a13c76b7567ffb215046c57556b661890c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91aa9d18d2efa1c3559a3a17858453a13c76b7567ffb215046c57556b661890c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:29Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.086885 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:29Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.100822 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.101049 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.101237 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.101418 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.101583 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:29Z","lastTransitionTime":"2025-11-24T11:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.113547 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:29Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.139954 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69b8017daa872327d88eab8150845309e30c5cf37b229292e7c8a80e5d599c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:29Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.156987 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:29Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.171744 5072 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:29Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.189553 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c05ddf6-986e-4bd6-95f0-7d734bc59140\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://894e58e94d99e8ef26722db709e0135d59ac4847daf001e37ce266c9baf02e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4b260f16a11dade8c8b120408cf2d167dd868a9b938f4231aa811546252c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wndk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:29Z is after 2025-08-24T17:21:41Z" Nov 24 
11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.205577 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.205626 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.205644 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.205667 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.205685 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:29Z","lastTransitionTime":"2025-11-24T11:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.206563 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nnrv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60100e7d-c8b1-4b18-8567-46e21096fa0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nnrv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:29Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.225955 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:29Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.244881 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:29Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.260998 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:29Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.286419 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db181b35d5ddd8cb7ce31d9293b82a515a8889794cf9696c664b101693247cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:10:18Z\\\",\\\"message\\\":\\\"2025-11-24T11:09:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_93e4312d-4a0d-4245-ac97-02477f03c30c\\\\n2025-11-24T11:09:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_93e4312d-4a0d-4245-ac97-02477f03c30c to /host/opt/cni/bin/\\\\n2025-11-24T11:09:33Z [verbose] multus-daemon started\\\\n2025-11-24T11:09:33Z [verbose] Readiness Indicator file check\\\\n2025-11-24T11:10:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:29Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.305456 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:29Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.308336 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.308599 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.308755 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.308972 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.309114 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:29Z","lastTransitionTime":"2025-11-24T11:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.324162 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:29Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.344673 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:29Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.365143 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:29Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.411733 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.411780 5072 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.411817 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.411838 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.411852 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:29Z","lastTransitionTime":"2025-11-24T11:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.514478 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.514526 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.514542 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.514567 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.514584 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:29Z","lastTransitionTime":"2025-11-24T11:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.617481 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.617535 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.617557 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.617585 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.617603 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:29Z","lastTransitionTime":"2025-11-24T11:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.720060 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.720131 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.720153 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.720184 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.720202 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:29Z","lastTransitionTime":"2025-11-24T11:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.822779 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.822830 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.822846 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.822868 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.822885 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:29Z","lastTransitionTime":"2025-11-24T11:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.925948 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.926007 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.926026 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.926051 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:29 crc kubenswrapper[5072]: I1124 11:10:29.926069 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:29Z","lastTransitionTime":"2025-11-24T11:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 24 11:10:30 crc kubenswrapper[5072]: I1124 11:10:30.016345 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:10:30 crc kubenswrapper[5072]: I1124 11:10:30.016480 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:10:30 crc kubenswrapper[5072]: I1124 11:10:30.016501 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:10:30 crc kubenswrapper[5072]: E1124 11:10:30.016710 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:10:30 crc kubenswrapper[5072]: E1124 11:10:30.016816 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:10:30 crc kubenswrapper[5072]: E1124 11:10:30.016971 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
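[Editor's note] Every entry above reduces to one underlying condition: the kubelet reports NetworkReady=false because its network plugin finds no CNI configuration file in /etc/kubernetes/cni/net.d/, so no pod sandbox can be given a network and the node's Ready condition stays False. The sketch below (Go, standard library only) reproduces that readiness test under stated assumptions: it is not the kubelet's actual code, the extension list (*.conf, *.conflist, *.json) matches what common CNI conf loaders scan for, and the exit codes are illustrative.

// cnicheck.go - a minimal sketch, not the kubelet's implementation: decide
// NetworkReady the way the log entries above do, by scanning the CNI conf
// directory named in the error message.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory from the log
	var found []string
	// Assumed extension list; common CNI conf loaders look for these three.
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pattern))
		if err != nil {
			fmt.Fprintln(os.Stderr, "bad glob pattern:", err)
			os.Exit(2)
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		// Mirrors the log: NetworkReady=false until the network provider
		// writes a config file into the directory.
		fmt.Printf("NetworkReady=false: no CNI configuration file in %s\n", confDir)
		os.Exit(1)
	}
	fmt.Println("NetworkReady=true, CNI config found:", found)
}

Run on the node itself, this would keep exiting 1 for as long as the NodeNotReady storm above continues, and exit 0 once the network provider drops its configuration into the directory.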
[The five-entry not-ready cycle repeats at 11:10:30.029, .142, .246, .349, .452, .554, .657, .760, .863 and .966 with the identical condition message; only the timestamps advance.]
Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.015919 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:10:31 crc kubenswrapper[5072]: E1124 11:10:31.016355 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.039555 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.039598 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.039608 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.039623 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.039633 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:31Z","lastTransitionTime":"2025-11-24T11:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:31 crc kubenswrapper[5072]: E1124 11:10:31.051011 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:31Z is after 2025-08-24T17:21:41Z"
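[Editor's note] The status patch itself is well-formed; it is rejected because the kubelet must go through the node.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743/node, and that endpoint's serving certificate expired on 2025-08-24T17:21:41Z, three months before the current time in the log. A minimal diagnostic sketch (Go; not part of the kubelet, and it assumes it is run on the node itself where 127.0.0.1:9743 is reachable) that reads the served certificate's validity window directly:

// certcheck.go - a minimal diagnostic sketch: fetch the certificate served by
// the webhook endpoint named in the error above and compare its validity
// window against the current time, reproducing the x509 expiry failure.
package main

import (
	"crypto/tls"
	"fmt"
	"os"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // endpoint from the webhook URL in the log
	conn, err := tls.Dial("tcp", addr, &tls.Config{
		InsecureSkipVerify: true, // inspect the certificate without trusting it
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, "dial failed:", err)
		os.Exit(2)
	}
	defer conn.Close()

	now := time.Now().UTC()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
			cert.Subject, cert.NotBefore.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
		if now.After(cert.NotAfter) {
			// Matches the log: "current time ... is after 2025-08-24T17:21:41Z"
			fmt.Printf("expired: current time %s is after %s\n",
				now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
		}
	}
}

Because InsecureSkipVerify disables chain and validity verification, the handshake succeeds even against the expired certificate, so NotAfter can be read and compared, which is exactly the check the kubelet's TLS client performs (and fails) before the webhook POST is ever sent.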
event="NodeHasNoDiskPressure" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.054638 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.054654 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.054668 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:31Z","lastTransitionTime":"2025-11-24T11:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:31 crc kubenswrapper[5072]: E1124 11:10:31.070628 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.074710 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.074772 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.074798 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.074831 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.074855 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:31Z","lastTransitionTime":"2025-11-24T11:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:31 crc kubenswrapper[5072]: E1124 11:10:31.090591 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.094869 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.094899 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.094908 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.094924 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.094934 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:31Z","lastTransitionTime":"2025-11-24T11:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:31 crc kubenswrapper[5072]: E1124 11:10:31.111202 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.114902 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.114961 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.114979 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.115006 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.115022 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:31Z","lastTransitionTime":"2025-11-24T11:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:31 crc kubenswrapper[5072]: E1124 11:10:31.130627 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:31Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:31 crc kubenswrapper[5072]: E1124 11:10:31.130871 5072 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.132498 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.132543 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.132557 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.132575 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.132587 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:31Z","lastTransitionTime":"2025-11-24T11:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.235465 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.235904 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.235926 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.235950 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.235967 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:31Z","lastTransitionTime":"2025-11-24T11:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.339413 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.339484 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.339501 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.339526 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.339544 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:31Z","lastTransitionTime":"2025-11-24T11:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.441704 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.441757 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.441775 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.441798 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.441815 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:31Z","lastTransitionTime":"2025-11-24T11:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.543896 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.543945 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.543962 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.543982 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.543998 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:31Z","lastTransitionTime":"2025-11-24T11:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.647513 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.647568 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.647586 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.647610 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.647630 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:31Z","lastTransitionTime":"2025-11-24T11:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.751029 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.751082 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.751099 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.751128 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.751145 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:31Z","lastTransitionTime":"2025-11-24T11:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.853983 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.854026 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.854037 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.854051 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.854060 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:31Z","lastTransitionTime":"2025-11-24T11:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.956663 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.956761 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.956778 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.956841 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:31 crc kubenswrapper[5072]: I1124 11:10:31.956863 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:31Z","lastTransitionTime":"2025-11-24T11:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.015535 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.015578 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.015680 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:10:32 crc kubenswrapper[5072]: E1124 11:10:32.015717 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:10:32 crc kubenswrapper[5072]: E1124 11:10:32.016065 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:10:32 crc kubenswrapper[5072]: E1124 11:10:32.016356 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.064776 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.064835 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.064853 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.064878 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.064897 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:32Z","lastTransitionTime":"2025-11-24T11:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.168008 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.168045 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.168053 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.168066 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.168075 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:32Z","lastTransitionTime":"2025-11-24T11:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.270889 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.270975 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.271003 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.271036 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.271061 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:32Z","lastTransitionTime":"2025-11-24T11:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.374155 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.374244 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.374265 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.374289 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.374307 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:32Z","lastTransitionTime":"2025-11-24T11:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.477200 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.477256 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.477273 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.477295 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.477312 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:32Z","lastTransitionTime":"2025-11-24T11:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.579782 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.579842 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.579859 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.579883 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.579901 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:32Z","lastTransitionTime":"2025-11-24T11:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.682243 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.682289 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.682306 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.682329 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.682345 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:32Z","lastTransitionTime":"2025-11-24T11:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.785338 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.785420 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.785436 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.785456 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.785472 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:32Z","lastTransitionTime":"2025-11-24T11:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.888555 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.888611 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.888628 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.888651 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.888667 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:32Z","lastTransitionTime":"2025-11-24T11:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.991568 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.991637 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.991658 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.991683 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:32 crc kubenswrapper[5072]: I1124 11:10:32.991702 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:32Z","lastTransitionTime":"2025-11-24T11:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.015446 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:10:33 crc kubenswrapper[5072]: E1124 11:10:33.015623 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.032114 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.094536 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.094616 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.094639 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.094671 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.094694 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:33Z","lastTransitionTime":"2025-11-24T11:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.197303 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.197429 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.197445 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.197464 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.197480 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:33Z","lastTransitionTime":"2025-11-24T11:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.300481 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.300531 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.300549 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.300573 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.300588 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:33Z","lastTransitionTime":"2025-11-24T11:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.403769 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.403824 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.403840 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.403862 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.403880 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:33Z","lastTransitionTime":"2025-11-24T11:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.507446 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.507508 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.507532 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.507562 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.507581 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:33Z","lastTransitionTime":"2025-11-24T11:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.610753 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.610806 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.610824 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.610847 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.610863 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:33Z","lastTransitionTime":"2025-11-24T11:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.713742 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.713809 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.713832 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.713859 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.713880 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:33Z","lastTransitionTime":"2025-11-24T11:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.816619 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.816734 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.816750 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.816774 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.816843 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:33Z","lastTransitionTime":"2025-11-24T11:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.863133 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:10:33 crc kubenswrapper[5072]: E1124 11:10:33.863407 5072 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:10:33 crc kubenswrapper[5072]: E1124 11:10:33.863511 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:11:37.863479683 +0000 UTC m=+149.575004229 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.919997 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.920030 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.920045 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.920066 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.920085 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:33Z","lastTransitionTime":"2025-11-24T11:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.964128 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.964304 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:10:33 crc kubenswrapper[5072]: E1124 11:10:33.964352 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:37.964321989 +0000 UTC m=+149.675846505 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.964433 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:10:33 crc kubenswrapper[5072]: I1124 11:10:33.964498 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:10:33 crc kubenswrapper[5072]: E1124 11:10:33.964441 5072 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:10:33 crc kubenswrapper[5072]: E1124 11:10:33.964547 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:10:33 crc kubenswrapper[5072]: E1124 11:10:33.964573 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:10:33 crc kubenswrapper[5072]: E1124 11:10:33.964592 5072 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:10:33 crc kubenswrapper[5072]: E1124 11:10:33.964594 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:11:37.964580895 +0000 UTC m=+149.676105401 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:10:33 crc kubenswrapper[5072]: E1124 11:10:33.964634 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:10:33 crc kubenswrapper[5072]: E1124 11:10:33.964669 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:11:37.964647666 +0000 UTC m=+149.676172182 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:10:33 crc kubenswrapper[5072]: E1124 11:10:33.964670 5072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:10:33 crc kubenswrapper[5072]: E1124 11:10:33.964693 5072 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:10:33 crc kubenswrapper[5072]: E1124 11:10:33.964758 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:11:37.964738478 +0000 UTC m=+149.676262994 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.016053 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.016127 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.016074 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:10:34 crc kubenswrapper[5072]: E1124 11:10:34.016222 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:10:34 crc kubenswrapper[5072]: E1124 11:10:34.016298 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:10:34 crc kubenswrapper[5072]: E1124 11:10:34.016398 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.024031 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.024091 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.024112 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.024140 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.024164 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:34Z","lastTransitionTime":"2025-11-24T11:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.126694 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.126746 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.126765 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.126789 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.126807 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:34Z","lastTransitionTime":"2025-11-24T11:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.229751 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.229828 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.229855 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.229887 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.229912 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:34Z","lastTransitionTime":"2025-11-24T11:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.332508 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.332587 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.332612 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.332682 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.332707 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:34Z","lastTransitionTime":"2025-11-24T11:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.436334 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.436415 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.436433 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.436455 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.436472 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:34Z","lastTransitionTime":"2025-11-24T11:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.540651 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.540714 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.540728 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.540754 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.540768 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:34Z","lastTransitionTime":"2025-11-24T11:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.644933 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.645020 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.645066 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.645276 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.645303 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:34Z","lastTransitionTime":"2025-11-24T11:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.748479 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.748542 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.748562 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.748586 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.748606 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:34Z","lastTransitionTime":"2025-11-24T11:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.852207 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.852298 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.852316 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.852339 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.852356 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:34Z","lastTransitionTime":"2025-11-24T11:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.955838 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.955899 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.955917 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.955942 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:34 crc kubenswrapper[5072]: I1124 11:10:34.955964 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:34Z","lastTransitionTime":"2025-11-24T11:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.015974 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:10:35 crc kubenswrapper[5072]: E1124 11:10:35.016399 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.038841 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.059770 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.059852 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.059882 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.059911 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.059928 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:35Z","lastTransitionTime":"2025-11-24T11:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.163441 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.163505 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.163523 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.163548 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.163569 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:35Z","lastTransitionTime":"2025-11-24T11:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.266510 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.266598 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.266613 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.266663 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.266681 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:35Z","lastTransitionTime":"2025-11-24T11:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.369120 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.369162 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.369170 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.369184 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.369196 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:35Z","lastTransitionTime":"2025-11-24T11:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.471541 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.471596 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.471613 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.471638 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.471655 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:35Z","lastTransitionTime":"2025-11-24T11:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.574689 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.574744 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.574761 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.574783 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.574801 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:35Z","lastTransitionTime":"2025-11-24T11:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.677930 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.677984 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.678000 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.678023 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.678040 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:35Z","lastTransitionTime":"2025-11-24T11:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.781391 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.781450 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.781465 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.781488 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.781503 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:35Z","lastTransitionTime":"2025-11-24T11:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.884232 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.884269 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.884280 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.884295 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.884307 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:35Z","lastTransitionTime":"2025-11-24T11:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.986789 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.986868 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.986894 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.986919 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:35 crc kubenswrapper[5072]: I1124 11:10:35.986936 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:35Z","lastTransitionTime":"2025-11-24T11:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.015874 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.015991 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.016040 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:10:36 crc kubenswrapper[5072]: E1124 11:10:36.016219 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:10:36 crc kubenswrapper[5072]: E1124 11:10:36.016429 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:10:36 crc kubenswrapper[5072]: E1124 11:10:36.016559 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.090120 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.090211 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.090232 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.090257 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.090273 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:36Z","lastTransitionTime":"2025-11-24T11:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.193227 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.193299 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.193470 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.193546 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.193588 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:36Z","lastTransitionTime":"2025-11-24T11:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.295773 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.295838 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.295859 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.295885 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.295904 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:36Z","lastTransitionTime":"2025-11-24T11:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.398832 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.398897 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.398914 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.398937 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.398953 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:36Z","lastTransitionTime":"2025-11-24T11:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.501397 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.501478 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.501513 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.501531 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.501543 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:36Z","lastTransitionTime":"2025-11-24T11:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.605179 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.605221 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.605233 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.605247 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.605259 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:36Z","lastTransitionTime":"2025-11-24T11:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.707528 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.707569 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.707578 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.707593 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.707604 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:36Z","lastTransitionTime":"2025-11-24T11:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.810710 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.810856 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.810878 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.810939 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.810964 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:36Z","lastTransitionTime":"2025-11-24T11:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.913819 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.913884 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.913908 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.913939 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:36 crc kubenswrapper[5072]: I1124 11:10:36.913964 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:36Z","lastTransitionTime":"2025-11-24T11:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.016677 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:10:37 crc kubenswrapper[5072]: E1124 11:10:37.016938 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.018667 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.018737 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.018828 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.018861 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.018886 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:37Z","lastTransitionTime":"2025-11-24T11:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.122121 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.122185 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.122202 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.122229 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.122248 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:37Z","lastTransitionTime":"2025-11-24T11:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.227699 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.227763 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.227782 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.227808 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.227826 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:37Z","lastTransitionTime":"2025-11-24T11:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.331095 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.331167 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.331190 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.331221 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.331245 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:37Z","lastTransitionTime":"2025-11-24T11:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.434640 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.434714 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.434735 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.434766 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.434789 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:37Z","lastTransitionTime":"2025-11-24T11:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.538214 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.538329 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.538346 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.538752 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.538777 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:37Z","lastTransitionTime":"2025-11-24T11:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.641924 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.642012 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.642035 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.642069 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.642093 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:37Z","lastTransitionTime":"2025-11-24T11:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.745023 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.745062 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.745070 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.745086 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.745096 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:37Z","lastTransitionTime":"2025-11-24T11:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.848270 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.848355 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.848419 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.848453 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.848475 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:37Z","lastTransitionTime":"2025-11-24T11:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.951338 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.951516 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.951546 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.951580 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:37 crc kubenswrapper[5072]: I1124 11:10:37.951602 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:37Z","lastTransitionTime":"2025-11-24T11:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.015709 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.015767 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.015788 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:10:38 crc kubenswrapper[5072]: E1124 11:10:38.015964 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:10:38 crc kubenswrapper[5072]: E1124 11:10:38.016041 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:10:38 crc kubenswrapper[5072]: E1124 11:10:38.016137 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.054978 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.055034 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.055051 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.055079 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.055099 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:38Z","lastTransitionTime":"2025-11-24T11:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.158624 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.158749 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.158817 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.158852 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.158874 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:38Z","lastTransitionTime":"2025-11-24T11:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.261571 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.261624 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.261635 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.261651 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.261684 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:38Z","lastTransitionTime":"2025-11-24T11:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.364234 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.364294 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.364311 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.364336 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.364352 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:38Z","lastTransitionTime":"2025-11-24T11:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.466931 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.467012 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.467036 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.467070 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.467095 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:38Z","lastTransitionTime":"2025-11-24T11:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.570188 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.570250 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.570268 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.570293 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.570311 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:38Z","lastTransitionTime":"2025-11-24T11:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.673055 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.673108 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.673125 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.673148 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.673164 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:38Z","lastTransitionTime":"2025-11-24T11:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.775214 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.775263 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.775279 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.775299 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.775316 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:38Z","lastTransitionTime":"2025-11-24T11:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.879111 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.879176 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.879197 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.879227 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.879248 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:38Z","lastTransitionTime":"2025-11-24T11:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.982472 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.982525 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.982541 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.982593 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:38 crc kubenswrapper[5072]: I1124 11:10:38.982610 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:38Z","lastTransitionTime":"2025-11-24T11:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.015953 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:10:39 crc kubenswrapper[5072]: E1124 11:10:39.016133 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.041522 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74eb978f-00ff-4ed3-a5da-8026a3211592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69b8017daa872327d88eab8150845309e30c5cf37b229292e7c8a80e5d599c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b5942d35c25032791bf5a43559a6234acf215f5d3f84a30e69aced0caecc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829da19d26a0ee0192a826e0b355266bcc48c77cf7b1fcf97a9e56add5d48645\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5add393950b53ed615d28b3d65833ae6a5174616b7170577babf1f4b7b6a2336\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4771d3054f62a25ec9be8b6628ead9e7eb99ad4ae545d803919cb0122343c0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd19ed803c2b441c4dde807b4cd4461c581058658db24f32dea39ad73b9cef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dba82c18fac19ddd5bbbeecab58a5dc685dbda72e7570cde5d445990066d2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-br29d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\"
:\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qjsxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.058944 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c05ddf6-986e-4bd6-95f0-7d734bc59140\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://894e58e94d99e8ef26722db709e0135d59ac4847daf001e37ce266c9baf02e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4b260f16a11dade8c8b120408cf2d167dd868a9b938f4231aa811546252c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gztmk\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wndk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.072135 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nnrv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60100e7d-c8b1-4b18-8567-46e21096fa0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbdfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nnrv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.084806 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.084849 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.084863 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.084885 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.084900 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:39Z","lastTransitionTime":"2025-11-24T11:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.090793 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b45fbff892ae7b15dc056d52d6485a995bb8a62ae423498027fe4866ef51e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcaa27616bc15c5ce26c371eb8a8f155914434949662b30894cd1ef7aa8e04a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.105540 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3973b61727227663fde759ad817fc73088f78293c67fc1bbbf5d5543afa7bbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.116398 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"175fd540-009b-4cb4-9c3e-e2ebc7e787f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d000a9d98b0e3ed54c1cc50148360bb8103d332c45ee03e745f14929132d2c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcts8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.138075 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t8b9x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9fe7b3-71a3-4388-8ee4-7531ceef6049\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db181b35d5ddd8cb7ce31d9293b82a515a8889794cf9696c664b101693247cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:10:18Z\\\",\\\"message\\\":\\\"2025-11-24T11:09:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_93e4312d-4a0d-4245-ac97-02477f03c30c\\\\n2025-11-24T11:09:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_93e4312d-4a0d-4245-ac97-02477f03c30c to /host/opt/cni/bin/\\\\n2025-11-24T11:09:33Z [verbose] multus-daemon started\\\\n2025-11-24T11:09:33Z [verbose] Readiness Indicator file check\\\\n2025-11-24T11:10:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmbvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t8b9x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.152745 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ee6420-36f0-467c-acf4-ebea8b02c8d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d57225dc522c1ee3621c75ac8f9f93c47d21afb8b0cb1aae2d6aea1d17a252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jfxnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.162353 5072 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-jz4mm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d555ef-9635-4aa7-bce1-7b1eb4805445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc7d5e96171aeadf92196d2b795c03ec634abd92814569a974200484569c145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8k8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jz4mm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.182713 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b946855f-8f8d-4423-bf5f-03d5f0dafb67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41099739f7a68ef18ea64b023b551a42670db1d9f80706439936aaf6942a38d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc0def38d015fe99a0b28cb7d120f2057643bcb99bf6f3040e5edb22a436000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07b4fd90df5b04817aa5d8428f0790e1f543f9480016c9f260e26edd478db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99413babe707e048ced5765f9107219351b2df
100fa7f430edb844cc73eecd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836caae6820dd3abcef209e4d66a7d64ba81ffe10c43494666a989cee7ee24ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbe0eb41ca08614efa2e3fa0af8362b0490a809470803a2e683711ac082dc7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbe0eb41ca08614efa2e3fa0af8362b0490a809470803a2e683711ac082dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d299986df8243aa52e1ca08fff9cac0db589f25b646f32366e304cf4fc915214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d299986df8243aa52e1ca08fff9cac0db589f25b646f32366e304cf4fc915214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4342dc1e79fedf172c723736a130039e76d481d9c04106a22ad25ab8e3c8cb9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4342dc1e79fedf172c723736a130039e76d481d9c04106a22ad25ab8e3c8cb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.193694 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.193782 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.193835 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.193862 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.193879 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:39Z","lastTransitionTime":"2025-11-24T11:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.198464 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9007e2c-ce36-49d5-ac3f-a2a0ced4e662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631c19835680cfbfc94d8d2864f79bb327a834aae717a2c9c525383029e44001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a299161b21fb4a4bc255d765f39eaafa3c87549cc62d458d28ff57fbb4b5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25ce4f3c52e2096622385f0bd213a058de7ddd3967ed8ba918e79fc63b00429c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28c581f99dcf7d549d235350230e7c3ef380dfeb4fdff577353410642700cb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.213344 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.235429 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a948c39e09b468da8df5726e7734af35e1d5324d44a6ad11f6e30031f27060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.254184 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.271971 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3de15bd-d863-49c9-a84d-44e5af94f01c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1845d620994797b0fad3550ee243fdb5719b076cd21e2cd9fbdbfd84d5afd805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://802b58c2bb92a1887147eee76414a66c948e077ad8a3835bccd344ae67562b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ca0cd9727c9f25252266ba758cfa75b6d48b1f683f97b36bc3a40d6e4d9346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9d18d2efa1c3559a3a17858453a13c76b7567ffb215046c57556b661890c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91aa9d18d2efa1c3559a3a17858453a13c76b7567ffb215046c57556b661890c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.288096 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab9dca0d-8225-46fc-a6dd-894e7bb06f86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://473adc67bdfd905b16f570cb175b1e550ed0929162d0d6c9903c855e069fc30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9772df13d553a560593560db376cb84f9ea9cb3dac735b48d2adb290c3d0e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9772df13d553a560593560db376cb84f9ea9cb3dac735b48d2adb290c3d0e76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.296436 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.296562 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.296590 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.296618 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.296638 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:39Z","lastTransitionTime":"2025-11-24T11:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.308653 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a60343a1-7193-420d-b6ef-81505cfad266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6597a19c8ed876fea1aaa8077315a8f39d0a79dee6af94970a3abcd552d673e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89e652bfaac124e13e0b3dfd3f167688a6b417b3613fb94d5422e2134ad95a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c9b314ea6e67a2866adfd0dc2e429523b6db6dab450a1a95fe5528548a0fcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f54ddd554c2e52a492be6b3e237793c7b7bed201d942c23d11983e154863a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03b85333c8be2e5efe40f082369652f009482373f8e230fd948b2dee4e2ee39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:09:23Z\\\",\\\"message\\\":\\\"W1124 11:09:12.543261 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 11:09:12.543592 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763982552 cert, and key in /tmp/serving-cert-2249531990/serving-signer.crt, /tmp/serving-cert-2249531990/serving-signer.key\\\\nI1124 11:09:13.042739 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:09:13.046128 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:09:13.046351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:09:13.048981 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2249531990/tls.crt::/tmp/serving-cert-2249531990/tls.key\\\\\\\"\\\\nF1124 11:09:23.567420 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2187669c4dc9aae8ca2f2141104aee1e20df96f0bccf45ecd4c8528f51d1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6b0468c00ca40213d12dd7b80c9f0dcfb93509a44ae37414053672e674f9f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.327060 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.357809 5072 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b30fc71ef9fdf26e114844d344131e79b2ea981d
3e69760bb92b1279f0b3c434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:10:27Z\\\",\\\"message\\\":\\\"twork controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:27Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:10:27.057212 7115 services_controller.go:434] Service openshift-service-ca-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-service-ca-operator 9ab1e41d-7da1-46d4-b0d8-4395ba0a6601 4750 0 2025-02-23 05:12:18 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:service-ca-operator] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0072d895f \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{S\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:10:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4qmw_openshift-ovn-kubernetes(80fda759-ddfd-438a-b5a2-cb775ee1bf7e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trpxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:09:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4qmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.399454 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.399523 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.399542 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.399569 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.399589 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:39Z","lastTransitionTime":"2025-11-24T11:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.502727 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.502791 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.502811 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.502837 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.502859 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:39Z","lastTransitionTime":"2025-11-24T11:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.605831 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.605889 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.605907 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.605934 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.605952 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:39Z","lastTransitionTime":"2025-11-24T11:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.708931 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.708999 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.709016 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.709040 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.709057 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:39Z","lastTransitionTime":"2025-11-24T11:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.812290 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.812363 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.812391 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.812410 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.812424 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:39Z","lastTransitionTime":"2025-11-24T11:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.915796 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.915868 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.915891 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.915916 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:39 crc kubenswrapper[5072]: I1124 11:10:39.915936 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:39Z","lastTransitionTime":"2025-11-24T11:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.016126 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.016153 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.016173 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:10:40 crc kubenswrapper[5072]: E1124 11:10:40.016265 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:10:40 crc kubenswrapper[5072]: E1124 11:10:40.016476 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:10:40 crc kubenswrapper[5072]: E1124 11:10:40.016555 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.018088 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.018156 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.018181 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.018209 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.018235 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:40Z","lastTransitionTime":"2025-11-24T11:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.120509 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.120581 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.120592 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.120610 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.120640 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:40Z","lastTransitionTime":"2025-11-24T11:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.223759 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.223828 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.223861 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.223891 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.223912 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:40Z","lastTransitionTime":"2025-11-24T11:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.327073 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.327179 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.327197 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.327223 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.327240 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:40Z","lastTransitionTime":"2025-11-24T11:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.429694 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.429754 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.429770 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.429794 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.429811 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:40Z","lastTransitionTime":"2025-11-24T11:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.532184 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.532228 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.532239 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.532261 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.532273 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:40Z","lastTransitionTime":"2025-11-24T11:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.634890 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.634947 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.634957 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.634972 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.634983 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:40Z","lastTransitionTime":"2025-11-24T11:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.741474 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.742745 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.742804 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.742833 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.742852 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:40Z","lastTransitionTime":"2025-11-24T11:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.845949 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.845993 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.846009 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.846031 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.846047 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:40Z","lastTransitionTime":"2025-11-24T11:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.949011 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.949062 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.949078 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.949102 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:40 crc kubenswrapper[5072]: I1124 11:10:40.949120 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:40Z","lastTransitionTime":"2025-11-24T11:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.015463 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:10:41 crc kubenswrapper[5072]: E1124 11:10:41.015656 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.051944 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.052271 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.052532 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.052761 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.053021 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:41Z","lastTransitionTime":"2025-11-24T11:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.156643 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.157615 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.157819 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.157985 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.158174 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:41Z","lastTransitionTime":"2025-11-24T11:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.230802 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.230839 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.230851 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.230867 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.230878 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:41Z","lastTransitionTime":"2025-11-24T11:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:41 crc kubenswrapper[5072]: E1124 11:10:41.249718 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.254487 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.254528 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.254536 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.254550 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.254559 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:41Z","lastTransitionTime":"2025-11-24T11:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:41 crc kubenswrapper[5072]: E1124 11:10:41.273362 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.279036 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.279104 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
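Every one of these "Error updating node status, will retry" entries fails for the same reason: the admission webhook's serving certificate at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-24. A minimal way to confirm this from the node itself — a sketch, assuming Python 3 with the third-party cryptography package available on the host; the host, port, and expiry values are taken from the entries above — is to fetch the certificate with verification disabled and print its validity window:

    import socket, ssl
    from datetime import datetime, timezone
    from cryptography import x509  # third-party package; assumed installed

    HOST, PORT = "127.0.0.1", 9743  # webhook endpoint named in the log entries

    # Disable verification so the TLS handshake succeeds even though the
    # certificate is expired; we only want to read its validity dates.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)  # raw DER-encoded certificate

    cert = x509.load_der_x509_certificate(der)
    now = datetime.now(timezone.utc).replace(tzinfo=None)  # naive UTC for comparison
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)
    print("expired:  ", now > cert.not_valid_after)

Against this node the script should report notAfter 2025-08-24 17:21:41 and expired True, matching the x509 error the kubelet logs on every patch attempt.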
event="NodeHasNoDiskPressure" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.279124 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.279148 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.279168 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:41Z","lastTransitionTime":"2025-11-24T11:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:41 crc kubenswrapper[5072]: E1124 11:10:41.298336 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.304074 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.304111 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.304127 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.304152 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.304167 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:41Z","lastTransitionTime":"2025-11-24T11:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:41 crc kubenswrapper[5072]: E1124 11:10:41.328104 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.332576 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.332644 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.332669 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.332696 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.332719 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:41Z","lastTransitionTime":"2025-11-24T11:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:41 crc kubenswrapper[5072]: E1124 11:10:41.354023 5072 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:10:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a41d3a9c-0834-482e-9391-dff98db0f196\\\",\\\"systemUUID\\\":\\\"d0383649-b062-48ed-9fc1-5e553cb9256a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:10:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:10:41 crc kubenswrapper[5072]: E1124 11:10:41.354246 5072 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.356485 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
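
Every retry of the node-status patch above dies at the same place: the API server cannot call the node.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743 because the certificate it presents expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-24. A minimal Go sketch of how one might confirm this from the node (a hypothetical diagnostic, not part of the log or of any OpenShift tooling; the endpoint address is taken from the Post URL in the error):

```go
// certcheck.go - dial the webhook endpoint from the kubelet error and print
// the validity window of the certificate it presents. Verification is
// deliberately skipped so the handshake succeeds even with an expired cert.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%q notBefore=%s notAfter=%s expired=%v\n",
			cert.Subject.CommonName,
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			time.Now().After(cert.NotAfter))
	}
}
```
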
event="NodeHasSufficientMemory" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.356551 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.356575 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.356604 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.356626 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:41Z","lastTransitionTime":"2025-11-24T11:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.459680 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.459766 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.459793 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.459824 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.459848 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:41Z","lastTransitionTime":"2025-11-24T11:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.563252 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.563325 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.563348 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.563385 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:41 crc kubenswrapper[5072]: I1124 11:10:41.563438 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:41Z","lastTransitionTime":"2025-11-24T11:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.098178 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.098335 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:10:42 crc kubenswrapper[5072]: E1124 11:10:42.098384 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:10:42 crc kubenswrapper[5072]: E1124 11:10:42.098594 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.098615 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:10:42 crc kubenswrapper[5072]: E1124 11:10:42.098934 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.101584 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.101616 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.101629 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.101648 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.101662 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:42Z","lastTransitionTime":"2025-11-24T11:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.203635 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.203712 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.203729 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.203758 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.203781 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:42Z","lastTransitionTime":"2025-11-24T11:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.311633 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.311712 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.311739 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.312009 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.312426 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:42Z","lastTransitionTime":"2025-11-24T11:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.414636 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.414692 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.414709 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.414732 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.414751 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:42Z","lastTransitionTime":"2025-11-24T11:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.517710 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.517788 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.517810 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.517839 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.517861 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:42Z","lastTransitionTime":"2025-11-24T11:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.620710 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.620744 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.620753 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.620767 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.620776 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:42Z","lastTransitionTime":"2025-11-24T11:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.723415 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.723475 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.723493 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.723516 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.723535 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:42Z","lastTransitionTime":"2025-11-24T11:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.826500 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.826552 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.826567 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.826588 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.826600 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:42Z","lastTransitionTime":"2025-11-24T11:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.929585 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.929652 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.929674 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.929704 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:42 crc kubenswrapper[5072]: I1124 11:10:42.929727 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:42Z","lastTransitionTime":"2025-11-24T11:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.015880 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:10:43 crc kubenswrapper[5072]: E1124 11:10:43.016572 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.017039 5072 scope.go:117] "RemoveContainer" containerID="b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434" Nov 24 11:10:43 crc kubenswrapper[5072]: E1124 11:10:43.017268 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n4qmw_openshift-ovn-kubernetes(80fda759-ddfd-438a-b5a2-cb775ee1bf7e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.032878 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.032917 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.032947 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.032963 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.032973 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:43Z","lastTransitionTime":"2025-11-24T11:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.135777 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.135810 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.135820 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.135836 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.135847 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:43Z","lastTransitionTime":"2025-11-24T11:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.238913 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.238970 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.238988 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.239011 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.239030 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:43Z","lastTransitionTime":"2025-11-24T11:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.342555 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.342618 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.342642 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.342672 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.342696 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:43Z","lastTransitionTime":"2025-11-24T11:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.446126 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.446187 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.446204 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.446231 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.446248 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:43Z","lastTransitionTime":"2025-11-24T11:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.549457 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.549517 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.549535 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.549562 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.549582 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:43Z","lastTransitionTime":"2025-11-24T11:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.652331 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.652422 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.652436 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.652455 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.652471 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:43Z","lastTransitionTime":"2025-11-24T11:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.755673 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.755722 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.755739 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.755763 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.755782 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:43Z","lastTransitionTime":"2025-11-24T11:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.858729 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.858788 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.858825 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.858855 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.858877 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:43Z","lastTransitionTime":"2025-11-24T11:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.961426 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.961477 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.961494 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.961513 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:43 crc kubenswrapper[5072]: I1124 11:10:43.961528 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:43Z","lastTransitionTime":"2025-11-24T11:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.016118 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.016216 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.016132 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:10:44 crc kubenswrapper[5072]: E1124 11:10:44.016348 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:10:44 crc kubenswrapper[5072]: E1124 11:10:44.016536 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:10:44 crc kubenswrapper[5072]: E1124 11:10:44.016690 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.064928 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.064986 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.065004 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.065041 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.065057 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:44Z","lastTransitionTime":"2025-11-24T11:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.168672 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.168745 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.168770 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.168801 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.168823 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:44Z","lastTransitionTime":"2025-11-24T11:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.272469 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.272926 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.272945 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.272968 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.272986 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:44Z","lastTransitionTime":"2025-11-24T11:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.376066 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.376120 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.376138 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.376165 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.376182 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:44Z","lastTransitionTime":"2025-11-24T11:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.479500 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.479558 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.479579 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.479605 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.479622 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:44Z","lastTransitionTime":"2025-11-24T11:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.582783 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.582872 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.582896 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.582926 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.582948 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:44Z","lastTransitionTime":"2025-11-24T11:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.685890 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.685945 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.685962 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.685985 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.686002 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:44Z","lastTransitionTime":"2025-11-24T11:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.789025 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.789093 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.789148 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.789188 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.789245 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:44Z","lastTransitionTime":"2025-11-24T11:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.892054 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.892114 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.892132 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.892157 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.892174 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:44Z","lastTransitionTime":"2025-11-24T11:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.995054 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.995110 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.995126 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.995150 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:44 crc kubenswrapper[5072]: I1124 11:10:44.995166 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:44Z","lastTransitionTime":"2025-11-24T11:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.015627 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7"
Nov 24 11:10:45 crc kubenswrapper[5072]: E1124 11:10:45.016269 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f"
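Until that file appears, the node object reports the same condition through the API. If client access is available, the standard commands below (a sketch; "crc" is the node name taken from these records) show the Ready condition and its reason without tailing the journal.

# Node summary; STATUS should read NotReady while the CNI config is missing.
oc get node crc
# Just the Ready condition, including the KubeletNotReady reason/message:
oc get node crc -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'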
pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.098482 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.098548 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.098570 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.098598 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.098619 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:45Z","lastTransitionTime":"2025-11-24T11:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.202205 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.202267 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.202288 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.202334 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.202355 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:45Z","lastTransitionTime":"2025-11-24T11:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.305908 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.305985 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.306006 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.306034 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.306056 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:45Z","lastTransitionTime":"2025-11-24T11:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.408848 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.408910 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.408928 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.408953 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.408979 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:45Z","lastTransitionTime":"2025-11-24T11:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.512440 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.512554 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.512576 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.512599 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.512618 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:45Z","lastTransitionTime":"2025-11-24T11:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.615505 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.615552 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.615562 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.615579 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.615589 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:45Z","lastTransitionTime":"2025-11-24T11:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.718174 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.718242 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.718261 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.718286 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.718304 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:45Z","lastTransitionTime":"2025-11-24T11:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.820347 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.820390 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.820398 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.820411 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.820421 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:45Z","lastTransitionTime":"2025-11-24T11:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.923788 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.923851 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.923867 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.923892 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:45 crc kubenswrapper[5072]: I1124 11:10:45.923910 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:45Z","lastTransitionTime":"2025-11-24T11:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.015689 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.015733 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:10:46 crc kubenswrapper[5072]: E1124 11:10:46.015884 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.015997 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:10:46 crc kubenswrapper[5072]: E1124 11:10:46.016181 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:10:46 crc kubenswrapper[5072]: E1124 11:10:46.016439 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
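The pods cycling through "No sandbox for pod can be found" and "Error syncing pod, skipping" all need a pod network: the kubelet cannot create their sandboxes until a CNI plugin is registered, so it requeues them and the same records repeat each sync interval. A sketch for watching just that retry loop (standard journalctl/grep usage; the unit name kubelet is an assumption based on this host's kubelet service):

# Follow only the sandbox-creation and pod-sync failures:
journalctl -u kubelet -f | grep -E 'No sandbox for pod|Error syncing pod'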
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.028207 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.028277 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.028292 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.028313 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.028326 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:46Z","lastTransitionTime":"2025-11-24T11:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.131973 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.132066 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.132084 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.132105 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.132122 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:46Z","lastTransitionTime":"2025-11-24T11:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.234841 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.234907 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.234927 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.235006 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.235028 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:46Z","lastTransitionTime":"2025-11-24T11:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.340905 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.340969 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.340990 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.341029 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.341085 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:46Z","lastTransitionTime":"2025-11-24T11:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.443641 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.443690 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.443707 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.443823 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.443895 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:46Z","lastTransitionTime":"2025-11-24T11:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.547877 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.548079 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.548098 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.548123 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.548142 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:46Z","lastTransitionTime":"2025-11-24T11:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.650891 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.650948 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.650964 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.650988 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.651007 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:46Z","lastTransitionTime":"2025-11-24T11:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.754102 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.754190 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.754212 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.754250 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.754272 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:46Z","lastTransitionTime":"2025-11-24T11:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.857499 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.857535 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.857543 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.857557 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.857568 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:46Z","lastTransitionTime":"2025-11-24T11:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.959725 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.959798 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.959815 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.959839 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:46 crc kubenswrapper[5072]: I1124 11:10:46.959857 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:46Z","lastTransitionTime":"2025-11-24T11:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.015934 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:10:47 crc kubenswrapper[5072]: E1124 11:10:47.016061 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.062313 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.062365 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.062397 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.062416 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.062430 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:47Z","lastTransitionTime":"2025-11-24T11:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.164962 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.165046 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.165156 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.165415 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.165452 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:47Z","lastTransitionTime":"2025-11-24T11:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.268976 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.269063 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.269091 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.269164 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.269232 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:47Z","lastTransitionTime":"2025-11-24T11:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.371990 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.372050 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.372070 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.372104 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.372122 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:47Z","lastTransitionTime":"2025-11-24T11:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.474169 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.474215 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.474227 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.474245 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.474755 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:47Z","lastTransitionTime":"2025-11-24T11:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.577093 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.577139 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.577151 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.577170 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.577183 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:47Z","lastTransitionTime":"2025-11-24T11:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.679205 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.679250 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.679265 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.679282 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.679294 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:47Z","lastTransitionTime":"2025-11-24T11:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.782946 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.783065 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.783110 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.783144 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.783165 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:47Z","lastTransitionTime":"2025-11-24T11:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.885108 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.885132 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.885141 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.885154 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.885164 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:47Z","lastTransitionTime":"2025-11-24T11:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.988367 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.988467 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.988490 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.988518 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:47 crc kubenswrapper[5072]: I1124 11:10:47.988539 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:47Z","lastTransitionTime":"2025-11-24T11:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.016141 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.016202 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:10:48 crc kubenswrapper[5072]: E1124 11:10:48.016334 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.016441 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:10:48 crc kubenswrapper[5072]: E1124 11:10:48.016618 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:10:48 crc kubenswrapper[5072]: E1124 11:10:48.016712 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.090871 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.090930 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.090950 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.090974 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.091040 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:48Z","lastTransitionTime":"2025-11-24T11:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.194426 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.194553 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.194575 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.194598 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.194617 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:48Z","lastTransitionTime":"2025-11-24T11:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.297284 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.297349 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.297369 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.297423 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.297448 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:48Z","lastTransitionTime":"2025-11-24T11:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.400714 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.400804 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.400827 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.400862 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.400885 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:48Z","lastTransitionTime":"2025-11-24T11:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.504251 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.504301 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.504321 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.504343 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.504360 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:48Z","lastTransitionTime":"2025-11-24T11:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.607537 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.607591 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.607608 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.607632 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.607652 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:48Z","lastTransitionTime":"2025-11-24T11:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.710995 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.711023 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.711039 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.711052 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.711061 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:48Z","lastTransitionTime":"2025-11-24T11:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.813172 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.813206 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.813218 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.813237 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.813246 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:48Z","lastTransitionTime":"2025-11-24T11:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.916154 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.916210 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.916227 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.916252 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:48 crc kubenswrapper[5072]: I1124 11:10:48.916273 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:48Z","lastTransitionTime":"2025-11-24T11:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.016477 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:10:49 crc kubenswrapper[5072]: E1124 11:10:49.016669 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.019271 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.019337 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.019361 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.019415 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.019437 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:49Z","lastTransitionTime":"2025-11-24T11:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.065129 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=14.065105013 podStartE2EDuration="14.065105013s" podCreationTimestamp="2025-11-24 11:10:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:10:49.06498871 +0000 UTC m=+100.776513226" watchObservedRunningTime="2025-11-24 11:10:49.065105013 +0000 UTC m=+100.776629529" Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.107823 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=78.107794562 podStartE2EDuration="1m18.107794562s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:10:49.088047005 +0000 UTC m=+100.799571561" watchObservedRunningTime="2025-11-24 11:10:49.107794562 +0000 UTC m=+100.819319078" Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.122840 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.123287 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.123535 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.123914 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.124501 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:49Z","lastTransitionTime":"2025-11-24T11:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.192671 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=51.192634079 podStartE2EDuration="51.192634079s" podCreationTimestamp="2025-11-24 11:09:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:10:49.183598935 +0000 UTC m=+100.895123451" watchObservedRunningTime="2025-11-24 11:10:49.192634079 +0000 UTC m=+100.904158595" Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.203237 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=16.203211239 podStartE2EDuration="16.203211239s" podCreationTimestamp="2025-11-24 11:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:10:49.203113197 +0000 UTC m=+100.914637713" watchObservedRunningTime="2025-11-24 11:10:49.203211239 +0000 UTC m=+100.914735755" Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.227948 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.227993 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.228004 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.228024 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.228036 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:49Z","lastTransitionTime":"2025-11-24T11:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.242926 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=80.242907088 podStartE2EDuration="1m20.242907088s" podCreationTimestamp="2025-11-24 11:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:10:49.242808556 +0000 UTC m=+100.954333062" watchObservedRunningTime="2025-11-24 11:10:49.242907088 +0000 UTC m=+100.954431574" Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.330206 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.330245 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.330255 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.330268 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.330278 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:49Z","lastTransitionTime":"2025-11-24T11:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.342975 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-qjsxf" podStartSLOduration=78.342959415 podStartE2EDuration="1m18.342959415s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:10:49.333784548 +0000 UTC m=+101.045309024" watchObservedRunningTime="2025-11-24 11:10:49.342959415 +0000 UTC m=+101.054483891" Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.354824 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-jz4mm" podStartSLOduration=78.354808305 podStartE2EDuration="1m18.354808305s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:10:49.343730853 +0000 UTC m=+101.055255329" watchObservedRunningTime="2025-11-24 11:10:49.354808305 +0000 UTC m=+101.066332781" Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.366816 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wndk6" podStartSLOduration=78.366802989 podStartE2EDuration="1m18.366802989s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:10:49.355508382 +0000 UTC m=+101.067032858" watchObservedRunningTime="2025-11-24 11:10:49.366802989 +0000 UTC m=+101.078327465" 
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.403472 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-bkjf7" podStartSLOduration=80.403452776 podStartE2EDuration="1m20.403452776s" podCreationTimestamp="2025-11-24 11:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:10:49.402605306 +0000 UTC m=+101.114129802" watchObservedRunningTime="2025-11-24 11:10:49.403452776 +0000 UTC m=+101.114977272"
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.417850 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-t8b9x" podStartSLOduration=78.417833806 podStartE2EDuration="1m18.417833806s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:10:49.417699353 +0000 UTC m=+101.129223879" watchObservedRunningTime="2025-11-24 11:10:49.417833806 +0000 UTC m=+101.129358292"
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.432470 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podStartSLOduration=78.432454062 podStartE2EDuration="1m18.432454062s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:10:49.431683993 +0000 UTC m=+101.143208509" watchObservedRunningTime="2025-11-24 11:10:49.432454062 +0000 UTC m=+101.143978548"
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.432895 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.432937 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.432949 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.432966 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.432978 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:49Z","lastTransitionTime":"2025-11-24T11:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.539336 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.539452 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.539477 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.539505 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.539534 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:49Z","lastTransitionTime":"2025-11-24T11:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.586850 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60100e7d-c8b1-4b18-8567-46e21096fa0f-metrics-certs\") pod \"network-metrics-daemon-nnrv7\" (UID: \"60100e7d-c8b1-4b18-8567-46e21096fa0f\") " pod="openshift-multus/network-metrics-daemon-nnrv7"
Nov 24 11:10:49 crc kubenswrapper[5072]: E1124 11:10:49.587095 5072 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 24 11:10:49 crc kubenswrapper[5072]: E1124 11:10:49.587179 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60100e7d-c8b1-4b18-8567-46e21096fa0f-metrics-certs podName:60100e7d-c8b1-4b18-8567-46e21096fa0f nodeName:}" failed. No retries permitted until 2025-11-24 11:11:53.587161281 +0000 UTC m=+165.298685757 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60100e7d-c8b1-4b18-8567-46e21096fa0f-metrics-certs") pod "network-metrics-daemon-nnrv7" (UID: "60100e7d-c8b1-4b18-8567-46e21096fa0f") : object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.642871 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.642934 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.642952 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.642976 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.642996 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:49Z","lastTransitionTime":"2025-11-24T11:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.746211 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.746277 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.746304 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.746350 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.746407 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:49Z","lastTransitionTime":"2025-11-24T11:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.849483 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.849543 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.849561 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.849587 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.849605 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:49Z","lastTransitionTime":"2025-11-24T11:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.952198 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.952574 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.952752 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.952909 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:49 crc kubenswrapper[5072]: I1124 11:10:49.953057 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:49Z","lastTransitionTime":"2025-11-24T11:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.015459 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.015550 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.015648 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:10:50 crc kubenswrapper[5072]: E1124 11:10:50.015909 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:10:50 crc kubenswrapper[5072]: E1124 11:10:50.016294 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:10:50 crc kubenswrapper[5072]: E1124 11:10:50.016825 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.056506 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.056713 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.056847 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.056982 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.057110 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:50Z","lastTransitionTime":"2025-11-24T11:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.160838 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.160893 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.160910 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.160940 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.160958 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:50Z","lastTransitionTime":"2025-11-24T11:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.263552 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.263607 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.263678 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.263709 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.263734 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:50Z","lastTransitionTime":"2025-11-24T11:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.366810 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.366876 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.366898 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.366922 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.366939 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:50Z","lastTransitionTime":"2025-11-24T11:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.469511 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.469568 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.469589 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.469616 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.469636 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:50Z","lastTransitionTime":"2025-11-24T11:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.573098 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.573169 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.573192 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.573216 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.573233 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:50Z","lastTransitionTime":"2025-11-24T11:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.675732 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.675813 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.675839 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.675865 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.675888 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:50Z","lastTransitionTime":"2025-11-24T11:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.779190 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.779254 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.779276 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.779304 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.779321 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:50Z","lastTransitionTime":"2025-11-24T11:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.882949 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.883017 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.883036 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.883062 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.883082 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:50Z","lastTransitionTime":"2025-11-24T11:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.986026 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.986082 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.986101 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.986124 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:50 crc kubenswrapper[5072]: I1124 11:10:50.986145 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:50Z","lastTransitionTime":"2025-11-24T11:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.016231 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7"
Nov 24 11:10:51 crc kubenswrapper[5072]: E1124 11:10:51.016640 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.089080 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.089134 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.089146 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.089163 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.089175 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:51Z","lastTransitionTime":"2025-11-24T11:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.192312 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.192439 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.192460 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.192490 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.192548 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:51Z","lastTransitionTime":"2025-11-24T11:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.296291 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.296354 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.296397 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.296423 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.296441 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:51Z","lastTransitionTime":"2025-11-24T11:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.399655 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.399709 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.399725 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.399748 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.399766 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:51Z","lastTransitionTime":"2025-11-24T11:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.491913 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.491960 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.492044 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.492066 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.492082 5072 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:10:51Z","lastTransitionTime":"2025-11-24T11:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.560496 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-q88b9"]
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.562301 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q88b9"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.565214 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.565777 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.565916 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.565970 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.610014 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57c21df7-4f5f-42ae-8736-708886727bb4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-q88b9\" (UID: \"57c21df7-4f5f-42ae-8736-708886727bb4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q88b9"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.610099 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/57c21df7-4f5f-42ae-8736-708886727bb4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-q88b9\" (UID: \"57c21df7-4f5f-42ae-8736-708886727bb4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q88b9"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.610148 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/57c21df7-4f5f-42ae-8736-708886727bb4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-q88b9\" (UID: \"57c21df7-4f5f-42ae-8736-708886727bb4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q88b9"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.610206 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57c21df7-4f5f-42ae-8736-708886727bb4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-q88b9\" (UID: \"57c21df7-4f5f-42ae-8736-708886727bb4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q88b9"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.610452 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/57c21df7-4f5f-42ae-8736-708886727bb4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-q88b9\" (UID: \"57c21df7-4f5f-42ae-8736-708886727bb4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q88b9"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.712250 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/57c21df7-4f5f-42ae-8736-708886727bb4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-q88b9\" (UID: \"57c21df7-4f5f-42ae-8736-708886727bb4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q88b9"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.712359 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57c21df7-4f5f-42ae-8736-708886727bb4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-q88b9\" (UID: \"57c21df7-4f5f-42ae-8736-708886727bb4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q88b9"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.712460 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/57c21df7-4f5f-42ae-8736-708886727bb4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-q88b9\" (UID: \"57c21df7-4f5f-42ae-8736-708886727bb4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q88b9"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.712536 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/57c21df7-4f5f-42ae-8736-708886727bb4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-q88b9\" (UID: \"57c21df7-4f5f-42ae-8736-708886727bb4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q88b9"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.712611 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57c21df7-4f5f-42ae-8736-708886727bb4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-q88b9\" (UID: \"57c21df7-4f5f-42ae-8736-708886727bb4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q88b9"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.712635 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/57c21df7-4f5f-42ae-8736-708886727bb4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-q88b9\" (UID: \"57c21df7-4f5f-42ae-8736-708886727bb4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q88b9"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.712726 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/57c21df7-4f5f-42ae-8736-708886727bb4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-q88b9\" (UID: \"57c21df7-4f5f-42ae-8736-708886727bb4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q88b9"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.713335 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/57c21df7-4f5f-42ae-8736-708886727bb4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-q88b9\" (UID: \"57c21df7-4f5f-42ae-8736-708886727bb4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q88b9"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.721256 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57c21df7-4f5f-42ae-8736-708886727bb4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-q88b9\" (UID: \"57c21df7-4f5f-42ae-8736-708886727bb4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q88b9"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.745656 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57c21df7-4f5f-42ae-8736-708886727bb4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-q88b9\" (UID: \"57c21df7-4f5f-42ae-8736-708886727bb4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q88b9"
Nov 24 11:10:51 crc kubenswrapper[5072]: I1124 11:10:51.884903 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q88b9"
Nov 24 11:10:51 crc kubenswrapper[5072]: W1124 11:10:51.911486 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57c21df7_4f5f_42ae_8736_708886727bb4.slice/crio-260de56fe8424fb988e8a55cd9c1762a1d18125f689de818b009e690bf68dbfa WatchSource:0}: Error finding container 260de56fe8424fb988e8a55cd9c1762a1d18125f689de818b009e690bf68dbfa: Status 404 returned error can't find the container with id 260de56fe8424fb988e8a55cd9c1762a1d18125f689de818b009e690bf68dbfa
Nov 24 11:10:52 crc kubenswrapper[5072]: I1124 11:10:52.015399 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:10:52 crc kubenswrapper[5072]: I1124 11:10:52.015411 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:10:52 crc kubenswrapper[5072]: I1124 11:10:52.015413 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:10:52 crc kubenswrapper[5072]: E1124 11:10:52.015674 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:10:52 crc kubenswrapper[5072]: E1124 11:10:52.016251 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 11:10:52 crc kubenswrapper[5072]: E1124 11:10:52.016315 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:10:52 crc kubenswrapper[5072]: I1124 11:10:52.594174 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q88b9" event={"ID":"57c21df7-4f5f-42ae-8736-708886727bb4","Type":"ContainerStarted","Data":"0398e93b96e4593173e909d7d44b9a8b89e9abcee2ccb5b76a5a5895b8674fc7"}
Nov 24 11:10:52 crc kubenswrapper[5072]: I1124 11:10:52.594221 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q88b9" event={"ID":"57c21df7-4f5f-42ae-8736-708886727bb4","Type":"ContainerStarted","Data":"260de56fe8424fb988e8a55cd9c1762a1d18125f689de818b009e690bf68dbfa"}
Nov 24 11:10:53 crc kubenswrapper[5072]: I1124 11:10:53.015519 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7"
Nov 24 11:10:53 crc kubenswrapper[5072]: E1124 11:10:53.015901 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f"
Nov 24 11:10:54 crc kubenswrapper[5072]: I1124 11:10:54.015947 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:10:54 crc kubenswrapper[5072]: I1124 11:10:54.016102 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:10:54 crc kubenswrapper[5072]: E1124 11:10:54.016332 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:10:54 crc kubenswrapper[5072]: I1124 11:10:54.016740 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:10:54 crc kubenswrapper[5072]: E1124 11:10:54.016935 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 11:10:54 crc kubenswrapper[5072]: E1124 11:10:54.017155 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:10:55 crc kubenswrapper[5072]: I1124 11:10:55.015987 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7"
Nov 24 11:10:55 crc kubenswrapper[5072]: E1124 11:10:55.016182 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f"
Nov 24 11:10:56 crc kubenswrapper[5072]: I1124 11:10:56.015721 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:10:56 crc kubenswrapper[5072]: I1124 11:10:56.015826 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:10:56 crc kubenswrapper[5072]: I1124 11:10:56.015826 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:10:56 crc kubenswrapper[5072]: E1124 11:10:56.016120 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 11:10:56 crc kubenswrapper[5072]: E1124 11:10:56.016207 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:10:56 crc kubenswrapper[5072]: E1124 11:10:56.016420 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:10:57 crc kubenswrapper[5072]: I1124 11:10:57.016068 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7"
Nov 24 11:10:57 crc kubenswrapper[5072]: E1124 11:10:57.016283 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f"
Nov 24 11:10:57 crc kubenswrapper[5072]: I1124 11:10:57.017102 5072 scope.go:117] "RemoveContainer" containerID="b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434"
Nov 24 11:10:57 crc kubenswrapper[5072]: E1124 11:10:57.017285 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n4qmw_openshift-ovn-kubernetes(80fda759-ddfd-438a-b5a2-cb775ee1bf7e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e"
Nov 24 11:10:58 crc kubenswrapper[5072]: I1124 11:10:58.016067 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:10:58 crc kubenswrapper[5072]: I1124 11:10:58.016127 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:10:58 crc kubenswrapper[5072]: E1124 11:10:58.016191 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 11:10:58 crc kubenswrapper[5072]: I1124 11:10:58.016071 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:10:58 crc kubenswrapper[5072]: E1124 11:10:58.016304 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:10:58 crc kubenswrapper[5072]: E1124 11:10:58.016497 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:10:59 crc kubenswrapper[5072]: I1124 11:10:59.015711 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7"
Nov 24 11:10:59 crc kubenswrapper[5072]: E1124 11:10:59.016982 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f"
Nov 24 11:11:00 crc kubenswrapper[5072]: I1124 11:11:00.015699 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:11:00 crc kubenswrapper[5072]: I1124 11:11:00.015915 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:11:00 crc kubenswrapper[5072]: E1124 11:11:00.016118 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:11:00 crc kubenswrapper[5072]: E1124 11:11:00.016237 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 11:11:00 crc kubenswrapper[5072]: I1124 11:11:00.016927 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:11:00 crc kubenswrapper[5072]: E1124 11:11:00.017144 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:11:01 crc kubenswrapper[5072]: I1124 11:11:01.015490 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7"
Nov 24 11:11:01 crc kubenswrapper[5072]: E1124 11:11:01.015697 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f"
Nov 24 11:11:02 crc kubenswrapper[5072]: I1124 11:11:02.015700 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:11:02 crc kubenswrapper[5072]: I1124 11:11:02.015752 5072 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:11:02 crc kubenswrapper[5072]: E1124 11:11:02.015875 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:11:02 crc kubenswrapper[5072]: I1124 11:11:02.015934 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:11:02 crc kubenswrapper[5072]: E1124 11:11:02.016101 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:11:02 crc kubenswrapper[5072]: E1124 11:11:02.016221 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:11:03 crc kubenswrapper[5072]: I1124 11:11:03.016156 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:11:03 crc kubenswrapper[5072]: E1124 11:11:03.016307 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:11:04 crc kubenswrapper[5072]: I1124 11:11:04.016149 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:11:04 crc kubenswrapper[5072]: I1124 11:11:04.016180 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:11:04 crc kubenswrapper[5072]: E1124 11:11:04.016549 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:11:04 crc kubenswrapper[5072]: I1124 11:11:04.016186 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:11:04 crc kubenswrapper[5072]: E1124 11:11:04.016355 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:11:04 crc kubenswrapper[5072]: E1124 11:11:04.016762 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:11:05 crc kubenswrapper[5072]: I1124 11:11:05.016369 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:11:05 crc kubenswrapper[5072]: E1124 11:11:05.017021 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:11:05 crc kubenswrapper[5072]: I1124 11:11:05.642980 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t8b9x_1a9fe7b3-71a3-4388-8ee4-7531ceef6049/kube-multus/1.log" Nov 24 11:11:05 crc kubenswrapper[5072]: I1124 11:11:05.643719 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t8b9x_1a9fe7b3-71a3-4388-8ee4-7531ceef6049/kube-multus/0.log" Nov 24 11:11:05 crc kubenswrapper[5072]: I1124 11:11:05.643790 5072 generic.go:334] "Generic (PLEG): container finished" podID="1a9fe7b3-71a3-4388-8ee4-7531ceef6049" containerID="db181b35d5ddd8cb7ce31d9293b82a515a8889794cf9696c664b101693247cc6" exitCode=1 Nov 24 11:11:05 crc kubenswrapper[5072]: I1124 11:11:05.643827 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t8b9x" event={"ID":"1a9fe7b3-71a3-4388-8ee4-7531ceef6049","Type":"ContainerDied","Data":"db181b35d5ddd8cb7ce31d9293b82a515a8889794cf9696c664b101693247cc6"} Nov 24 11:11:05 crc kubenswrapper[5072]: I1124 11:11:05.643862 5072 scope.go:117] "RemoveContainer" containerID="96637ece9dca11a6b9e2a8fff8e78ca37f48e9f86e3f076e80cbd56aa353ca74" Nov 24 11:11:05 crc kubenswrapper[5072]: I1124 11:11:05.645153 5072 scope.go:117] "RemoveContainer" containerID="db181b35d5ddd8cb7ce31d9293b82a515a8889794cf9696c664b101693247cc6" Nov 24 11:11:05 crc kubenswrapper[5072]: E1124 11:11:05.645928 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-t8b9x_openshift-multus(1a9fe7b3-71a3-4388-8ee4-7531ceef6049)\"" pod="openshift-multus/multus-t8b9x" podUID="1a9fe7b3-71a3-4388-8ee4-7531ceef6049" Nov 24 11:11:05 crc kubenswrapper[5072]: I1124 11:11:05.671882 5072 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q88b9" podStartSLOduration=94.671857229 podStartE2EDuration="1m34.671857229s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:10:52.61335902 +0000 UTC m=+104.324883496" watchObservedRunningTime="2025-11-24 11:11:05.671857229 +0000 UTC m=+117.383381745" Nov 24 11:11:06 crc kubenswrapper[5072]: I1124 11:11:06.015666 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:11:06 crc kubenswrapper[5072]: I1124 11:11:06.015795 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:11:06 crc kubenswrapper[5072]: I1124 11:11:06.015829 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:11:06 crc kubenswrapper[5072]: E1124 11:11:06.015993 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:11:06 crc kubenswrapper[5072]: E1124 11:11:06.016192 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:11:06 crc kubenswrapper[5072]: E1124 11:11:06.016310 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:11:06 crc kubenswrapper[5072]: I1124 11:11:06.649417 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t8b9x_1a9fe7b3-71a3-4388-8ee4-7531ceef6049/kube-multus/1.log" Nov 24 11:11:07 crc kubenswrapper[5072]: I1124 11:11:07.016105 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:11:07 crc kubenswrapper[5072]: E1124 11:11:07.016294 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:11:08 crc kubenswrapper[5072]: I1124 11:11:08.016224 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:11:08 crc kubenswrapper[5072]: I1124 11:11:08.016299 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:11:08 crc kubenswrapper[5072]: I1124 11:11:08.016224 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:11:08 crc kubenswrapper[5072]: E1124 11:11:08.016452 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:11:08 crc kubenswrapper[5072]: E1124 11:11:08.016578 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:11:08 crc kubenswrapper[5072]: E1124 11:11:08.016753 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:11:08 crc kubenswrapper[5072]: E1124 11:11:08.985299 5072 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 24 11:11:09 crc kubenswrapper[5072]: I1124 11:11:09.015694 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:11:09 crc kubenswrapper[5072]: E1124 11:11:09.017481 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:11:09 crc kubenswrapper[5072]: E1124 11:11:09.126425 5072 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 11:11:10 crc kubenswrapper[5072]: I1124 11:11:10.015420 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:11:10 crc kubenswrapper[5072]: I1124 11:11:10.015437 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:11:10 crc kubenswrapper[5072]: I1124 11:11:10.015598 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:11:10 crc kubenswrapper[5072]: E1124 11:11:10.015779 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:11:10 crc kubenswrapper[5072]: E1124 11:11:10.016102 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:11:10 crc kubenswrapper[5072]: E1124 11:11:10.016623 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:11:10 crc kubenswrapper[5072]: I1124 11:11:10.017078 5072 scope.go:117] "RemoveContainer" containerID="b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434" Nov 24 11:11:10 crc kubenswrapper[5072]: I1124 11:11:10.662675 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4qmw_80fda759-ddfd-438a-b5a2-cb775ee1bf7e/ovnkube-controller/3.log" Nov 24 11:11:10 crc kubenswrapper[5072]: I1124 11:11:10.666515 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" event={"ID":"80fda759-ddfd-438a-b5a2-cb775ee1bf7e","Type":"ContainerStarted","Data":"742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93"} Nov 24 11:11:10 crc kubenswrapper[5072]: I1124 11:11:10.667064 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:11:10 crc kubenswrapper[5072]: I1124 11:11:10.711720 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" podStartSLOduration=99.711704447 podStartE2EDuration="1m39.711704447s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:10.711230455 +0000 UTC m=+122.422754931" watchObservedRunningTime="2025-11-24 11:11:10.711704447 +0000 UTC m=+122.423228923" Nov 24 11:11:11 crc kubenswrapper[5072]: I1124 11:11:11.016579 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:11:11 crc kubenswrapper[5072]: E1124 11:11:11.016769 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:11:11 crc kubenswrapper[5072]: I1124 11:11:11.094420 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-nnrv7"] Nov 24 11:11:11 crc kubenswrapper[5072]: I1124 11:11:11.669442 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:11:11 crc kubenswrapper[5072]: E1124 11:11:11.669569 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:11:12 crc kubenswrapper[5072]: I1124 11:11:12.015730 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:11:12 crc kubenswrapper[5072]: I1124 11:11:12.015767 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:11:12 crc kubenswrapper[5072]: E1124 11:11:12.015916 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:11:12 crc kubenswrapper[5072]: E1124 11:11:12.016128 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:11:12 crc kubenswrapper[5072]: I1124 11:11:12.015753 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:11:12 crc kubenswrapper[5072]: E1124 11:11:12.016565 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:11:14 crc kubenswrapper[5072]: I1124 11:11:14.016266 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:11:14 crc kubenswrapper[5072]: E1124 11:11:14.016734 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:11:14 crc kubenswrapper[5072]: I1124 11:11:14.016486 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:11:14 crc kubenswrapper[5072]: E1124 11:11:14.016853 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:11:14 crc kubenswrapper[5072]: I1124 11:11:14.016467 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:11:14 crc kubenswrapper[5072]: E1124 11:11:14.016943 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:11:14 crc kubenswrapper[5072]: I1124 11:11:14.016544 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:11:14 crc kubenswrapper[5072]: E1124 11:11:14.017042 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:11:14 crc kubenswrapper[5072]: E1124 11:11:14.128047 5072 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 11:11:16 crc kubenswrapper[5072]: I1124 11:11:16.015995 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:11:16 crc kubenswrapper[5072]: I1124 11:11:16.016076 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:11:16 crc kubenswrapper[5072]: I1124 11:11:16.016071 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:11:16 crc kubenswrapper[5072]: I1124 11:11:16.016235 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:11:16 crc kubenswrapper[5072]: I1124 11:11:16.016450 5072 scope.go:117] "RemoveContainer" containerID="db181b35d5ddd8cb7ce31d9293b82a515a8889794cf9696c664b101693247cc6" Nov 24 11:11:16 crc kubenswrapper[5072]: E1124 11:11:16.016594 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:11:16 crc kubenswrapper[5072]: E1124 11:11:16.016720 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:11:16 crc kubenswrapper[5072]: E1124 11:11:16.016806 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:11:16 crc kubenswrapper[5072]: E1124 11:11:16.016877 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:11:16 crc kubenswrapper[5072]: I1124 11:11:16.686363 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t8b9x_1a9fe7b3-71a3-4388-8ee4-7531ceef6049/kube-multus/1.log" Nov 24 11:11:16 crc kubenswrapper[5072]: I1124 11:11:16.686633 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t8b9x" event={"ID":"1a9fe7b3-71a3-4388-8ee4-7531ceef6049","Type":"ContainerStarted","Data":"bfd40dad8f619581f0615e6e2037e751d4dfed983e7bf4530c461175ff0bb47f"} Nov 24 11:11:18 crc kubenswrapper[5072]: I1124 11:11:18.015999 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:11:18 crc kubenswrapper[5072]: I1124 11:11:18.016143 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:11:18 crc kubenswrapper[5072]: I1124 11:11:18.016216 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:11:18 crc kubenswrapper[5072]: I1124 11:11:18.016251 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:11:18 crc kubenswrapper[5072]: E1124 11:11:18.016154 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:11:18 crc kubenswrapper[5072]: E1124 11:11:18.016418 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:11:18 crc kubenswrapper[5072]: E1124 11:11:18.016552 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:11:18 crc kubenswrapper[5072]: E1124 11:11:18.016897 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nnrv7" podUID="60100e7d-c8b1-4b18-8567-46e21096fa0f" Nov 24 11:11:20 crc kubenswrapper[5072]: I1124 11:11:20.015804 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:11:20 crc kubenswrapper[5072]: I1124 11:11:20.016604 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:11:20 crc kubenswrapper[5072]: I1124 11:11:20.016606 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:11:20 crc kubenswrapper[5072]: I1124 11:11:20.016960 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:11:20 crc kubenswrapper[5072]: I1124 11:11:20.018724 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 24 11:11:20 crc kubenswrapper[5072]: I1124 11:11:20.018849 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 24 11:11:20 crc kubenswrapper[5072]: I1124 11:11:20.021573 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 24 11:11:20 crc kubenswrapper[5072]: I1124 11:11:20.021581 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 24 11:11:20 crc kubenswrapper[5072]: I1124 11:11:20.023866 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 24 11:11:20 crc kubenswrapper[5072]: I1124 11:11:20.024280 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.371480 5072 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.417157 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4qrkp"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.418348 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4qrkp" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.419810 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dzh8r"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.420599 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-dzh8r" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.422064 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.422871 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.422944 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.423930 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.424329 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-dqmfz"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.424646 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqmfz" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.426127 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-km2xf"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.426993 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-km2xf" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.427798 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rxs28"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.428188 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rxs28" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.439601 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rg9n"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.446455 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rg9n" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.448295 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.456948 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.457127 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.458740 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.458969 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.459174 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.459338 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.459509 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.459692 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.459809 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.460002 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ms2fp"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.460603 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.460905 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-fpxll"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.461014 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.461056 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms2fp" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.461638 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.461773 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-fpxll" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.462015 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.462297 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.462539 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.470948 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-798pd"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.471582 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-798pd" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.471967 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rmzh4"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.472805 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rmzh4" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.474166 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.475566 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bm2lw"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.475841 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.476109 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.476270 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bm2lw" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.476485 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.476817 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.476923 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-q8585"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.477291 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.477448 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.477568 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.477781 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-q8585" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.477824 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.478023 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.478435 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.478698 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.479014 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.479054 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.479301 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.480199 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.480425 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.480448 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-l28pf"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.481038 5072 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nldcl"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.481605 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nldcl" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.481975 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4fg22"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.482128 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-l28pf" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.482966 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4fg22" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.483840 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.484051 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.484159 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.484270 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.484529 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.484675 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.488198 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.491049 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.500629 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.500821 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qtf9d"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.501277 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9w2qz"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.501679 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.502614 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-qtf9d" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.503492 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h6q9x"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.504033 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h6q9x" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.516422 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-wxc9p"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.516996 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vftrc"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.517720 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7bjm7"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.517884 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vftrc" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.518219 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-wxc9p" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.518295 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ln5s8"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.518747 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln5s8" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.518767 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7bjm7" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.518840 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6876"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.519295 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6876" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.521879 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5d2ld"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.546365 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jj65"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.549905 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5k5rr"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.550206 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5d2ld" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.550589 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jj65" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.551300 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-m47n7"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.555547 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5k5rr" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.556149 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.556702 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.556970 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.558943 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-x6g8r"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.560286 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nwsjb"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.560702 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-m47n7" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.593595 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x6g8r" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.595392 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d33a4711-23b8-41cb-bf35-708e252369ac-serving-cert\") pod \"authentication-operator-69f744f599-q8585\" (UID: \"d33a4711-23b8-41cb-bf35-708e252369ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q8585" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.595436 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d33a4711-23b8-41cb-bf35-708e252369ac-service-ca-bundle\") pod \"authentication-operator-69f744f599-q8585\" (UID: \"d33a4711-23b8-41cb-bf35-708e252369ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q8585" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.595461 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs99l\" (UniqueName: \"kubernetes.io/projected/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-kube-api-access-gs99l\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.595486 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.595510 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-audit-dir\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.595531 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nfcp\" (UniqueName: \"kubernetes.io/projected/b2182353-061f-40bf-8f81-1cb1aaaf1b97-kube-api-access-2nfcp\") pod \"cluster-samples-operator-665b6dd947-nldcl\" (UID: \"b2182353-061f-40bf-8f81-1cb1aaaf1b97\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nldcl" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.595553 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca699c4e-ccec-4ff8-895f-109777beca4c-client-ca\") pod \"route-controller-manager-6576b87f9c-mzvpf\" (UID: \"ca699c4e-ccec-4ff8-895f-109777beca4c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.595582 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwkb5\" (UniqueName: \"kubernetes.io/projected/24b0c90f-a223-41e9-beb5-619fdeaf49c1-kube-api-access-dwkb5\") pod 
\"dns-operator-744455d44c-rmzh4\" (UID: \"24b0c90f-a223-41e9-beb5-619fdeaf49c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-rmzh4" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.595610 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca699c4e-ccec-4ff8-895f-109777beca4c-serving-cert\") pod \"route-controller-manager-6576b87f9c-mzvpf\" (UID: \"ca699c4e-ccec-4ff8-895f-109777beca4c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.595630 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5354347e-2a7e-42d4-a13c-33daf97e79c0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kcz78\" (UID: \"5354347e-2a7e-42d4-a13c-33daf97e79c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.595650 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.595673 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7hl9\" (UniqueName: \"kubernetes.io/projected/60ed0c7a-5210-4706-b7b6-d989561edf26-kube-api-access-j7hl9\") pod \"machine-approver-56656f9798-dqmfz\" (UID: \"60ed0c7a-5210-4706-b7b6-d989561edf26\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqmfz" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.595698 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/421f29d9-28d7-4e85-852e-d25b0529497a-client-ca\") pod \"controller-manager-879f6c89f-km2xf\" (UID: \"421f29d9-28d7-4e85-852e-d25b0529497a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km2xf" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.595718 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c77a843c-6b36-4143-aff0-f5e7d227c11d-audit-policies\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.595737 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-etcd-client\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.595756 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-etcd-serving-ca\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " 
pod="openshift-apiserver/apiserver-76f77b778f-4qrkp" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.595776 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d30ed7a-3577-40f4-8d32-eec9f851ab19-trusted-ca-bundle\") pod \"console-f9d7485db-798pd\" (UID: \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\") " pod="openshift-console/console-f9d7485db-798pd" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.595801 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.595822 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnqlw\" (UniqueName: \"kubernetes.io/projected/f62763cf-97b0-41ff-bac4-e4acd8060859-kube-api-access-cnqlw\") pod \"cluster-image-registry-operator-dc59b4c8b-4fg22\" (UID: \"f62763cf-97b0-41ff-bac4-e4acd8060859\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4fg22" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.595842 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9d30ed7a-3577-40f4-8d32-eec9f851ab19-service-ca\") pod \"console-f9d7485db-798pd\" (UID: \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\") " pod="openshift-console/console-f9d7485db-798pd" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.595868 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c677e814-7e89-49be-a000-091b8e49d6b8-serving-cert\") pod \"console-operator-58897d9998-l28pf\" (UID: \"c677e814-7e89-49be-a000-091b8e49d6b8\") " pod="openshift-console-operator/console-operator-58897d9998-l28pf" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.595901 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca699c4e-ccec-4ff8-895f-109777beca4c-config\") pod \"route-controller-manager-6576b87f9c-mzvpf\" (UID: \"ca699c4e-ccec-4ff8-895f-109777beca4c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.595923 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c77a843c-6b36-4143-aff0-f5e7d227c11d-audit-dir\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.595954 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-image-import-ca\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 
11:11:22.595967 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ztvf4"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.596358 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5sfl"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.596773 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399700-hnjjf"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.595977 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnr62\" (UniqueName: \"kubernetes.io/projected/bcbc6938-ae1b-4306-a73d-7f2c5dc64047-kube-api-access-pnr62\") pod \"machine-api-operator-5694c8668f-dzh8r\" (UID: \"bcbc6938-ae1b-4306-a73d-7f2c5dc64047\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dzh8r" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597071 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9d30ed7a-3577-40f4-8d32-eec9f851ab19-oauth-serving-cert\") pod \"console-f9d7485db-798pd\" (UID: \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\") " pod="openshift-console/console-f9d7485db-798pd" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597099 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brlb2\" (UniqueName: \"kubernetes.io/projected/c677e814-7e89-49be-a000-091b8e49d6b8-kube-api-access-brlb2\") pod \"console-operator-58897d9998-l28pf\" (UID: \"c677e814-7e89-49be-a000-091b8e49d6b8\") " pod="openshift-console-operator/console-operator-58897d9998-l28pf" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597120 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9188831-917b-434c-b118-24c7971f6381-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8rg9n\" (UID: \"d9188831-917b-434c-b118-24c7971f6381\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rg9n" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597139 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5354347e-2a7e-42d4-a13c-33daf97e79c0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kcz78\" (UID: \"5354347e-2a7e-42d4-a13c-33daf97e79c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597155 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5354347e-2a7e-42d4-a13c-33daf97e79c0-encryption-config\") pod \"apiserver-7bbb656c7d-kcz78\" (UID: \"5354347e-2a7e-42d4-a13c-33daf97e79c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597180 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-rxs28" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597200 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcbc6938-ae1b-4306-a73d-7f2c5dc64047-config\") pod \"machine-api-operator-5694c8668f-dzh8r\" (UID: \"bcbc6938-ae1b-4306-a73d-7f2c5dc64047\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dzh8r" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597230 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597245 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399700-hnjjf" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597315 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nwsjb" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597579 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ztvf4" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597588 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597654 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597672 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f62763cf-97b0-41ff-bac4-e4acd8060859-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4fg22\" (UID: \"f62763cf-97b0-41ff-bac4-e4acd8060859\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4fg22" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597687 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597701 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5354347e-2a7e-42d4-a13c-33daf97e79c0-audit-policies\") pod \"apiserver-7bbb656c7d-kcz78\" (UID: \"5354347e-2a7e-42d4-a13c-33daf97e79c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597714 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5354347e-2a7e-42d4-a13c-33daf97e79c0-serving-cert\") pod \"apiserver-7bbb656c7d-kcz78\" (UID: \"5354347e-2a7e-42d4-a13c-33daf97e79c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597731 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f62763cf-97b0-41ff-bac4-e4acd8060859-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4fg22\" (UID: \"f62763cf-97b0-41ff-bac4-e4acd8060859\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4fg22" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597764 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d33a4711-23b8-41cb-bf35-708e252369ac-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-q8585\" (UID: \"d33a4711-23b8-41cb-bf35-708e252369ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q8585" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597780 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hwng\" (UniqueName: \"kubernetes.io/projected/ca699c4e-ccec-4ff8-895f-109777beca4c-kube-api-access-9hwng\") pod \"route-controller-manager-6576b87f9c-mzvpf\" (UID: \"ca699c4e-ccec-4ff8-895f-109777beca4c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597797 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-serving-cert\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597813 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b2182353-061f-40bf-8f81-1cb1aaaf1b97-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-nldcl\" (UID: \"b2182353-061f-40bf-8f81-1cb1aaaf1b97\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nldcl" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597831 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-config\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597847 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/60ed0c7a-5210-4706-b7b6-d989561edf26-auth-proxy-config\") pod \"machine-approver-56656f9798-dqmfz\" (UID: \"60ed0c7a-5210-4706-b7b6-d989561edf26\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqmfz" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597863 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5c87ed3-ec26-42d1-99d0-37fd576f970d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bm2lw\" (UID: \"a5c87ed3-ec26-42d1-99d0-37fd576f970d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bm2lw" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597878 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgr77\" (UniqueName: \"kubernetes.io/projected/5354347e-2a7e-42d4-a13c-33daf97e79c0-kube-api-access-qgr77\") pod \"apiserver-7bbb656c7d-kcz78\" (UID: \"5354347e-2a7e-42d4-a13c-33daf97e79c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597925 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f62763cf-97b0-41ff-bac4-e4acd8060859-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4fg22\" (UID: \"f62763cf-97b0-41ff-bac4-e4acd8060859\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4fg22" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597942 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9d30ed7a-3577-40f4-8d32-eec9f851ab19-console-serving-cert\") pod \"console-f9d7485db-798pd\" (UID: \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\") " pod="openshift-console/console-f9d7485db-798pd" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597957 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a5c87ed3-ec26-42d1-99d0-37fd576f970d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bm2lw\" (UID: \"a5c87ed3-ec26-42d1-99d0-37fd576f970d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bm2lw" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597975 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/042c5da0-34af-4413-af57-feb5f484bfc3-serving-cert\") pod \"openshift-config-operator-7777fb866f-ms2fp\" (UID: \"042c5da0-34af-4413-af57-feb5f484bfc3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms2fp" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.597994 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5c87ed3-ec26-42d1-99d0-37fd576f970d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bm2lw\" (UID: \"a5c87ed3-ec26-42d1-99d0-37fd576f970d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bm2lw" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.598013 5072 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4m8t\" (UniqueName: \"kubernetes.io/projected/d33a4711-23b8-41cb-bf35-708e252369ac-kube-api-access-k4m8t\") pod \"authentication-operator-69f744f599-q8585\" (UID: \"d33a4711-23b8-41cb-bf35-708e252369ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q8585" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.598107 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5354347e-2a7e-42d4-a13c-33daf97e79c0-audit-dir\") pod \"apiserver-7bbb656c7d-kcz78\" (UID: \"5354347e-2a7e-42d4-a13c-33daf97e79c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.598139 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9188831-917b-434c-b118-24c7971f6381-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8rg9n\" (UID: \"d9188831-917b-434c-b118-24c7971f6381\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rg9n" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.598215 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/421f29d9-28d7-4e85-852e-d25b0529497a-config\") pod \"controller-manager-879f6c89f-km2xf\" (UID: \"421f29d9-28d7-4e85-852e-d25b0529497a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km2xf" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.598260 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-audit\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.598288 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/042c5da0-34af-4413-af57-feb5f484bfc3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ms2fp\" (UID: \"042c5da0-34af-4413-af57-feb5f484bfc3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms2fp" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.598312 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/421f29d9-28d7-4e85-852e-d25b0529497a-serving-cert\") pod \"controller-manager-879f6c89f-km2xf\" (UID: \"421f29d9-28d7-4e85-852e-d25b0529497a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km2xf" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.598341 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.598405 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-node-pullsecrets\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.598429 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxnz7\" (UniqueName: \"kubernetes.io/projected/9d30ed7a-3577-40f4-8d32-eec9f851ab19-kube-api-access-sxnz7\") pod \"console-f9d7485db-798pd\" (UID: \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\") " pod="openshift-console/console-f9d7485db-798pd" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.598497 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60ed0c7a-5210-4706-b7b6-d989561edf26-config\") pod \"machine-approver-56656f9798-dqmfz\" (UID: \"60ed0c7a-5210-4706-b7b6-d989561edf26\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqmfz" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.598548 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/24b0c90f-a223-41e9-beb5-619fdeaf49c1-metrics-tls\") pod \"dns-operator-744455d44c-rmzh4\" (UID: \"24b0c90f-a223-41e9-beb5-619fdeaf49c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-rmzh4" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.598584 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw648\" (UniqueName: \"kubernetes.io/projected/1cd359a9-17ba-43c9-8cb3-7c786777226b-kube-api-access-bw648\") pod \"downloads-7954f5f757-fpxll\" (UID: \"1cd359a9-17ba-43c9-8cb3-7c786777226b\") " pod="openshift-console/downloads-7954f5f757-fpxll" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.598623 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.598661 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d33a4711-23b8-41cb-bf35-708e252369ac-config\") pod \"authentication-operator-69f744f599-q8585\" (UID: \"d33a4711-23b8-41cb-bf35-708e252369ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q8585" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.598702 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkwnb\" (UniqueName: \"kubernetes.io/projected/421f29d9-28d7-4e85-852e-d25b0529497a-kube-api-access-hkwnb\") pod \"controller-manager-879f6c89f-km2xf\" (UID: \"421f29d9-28d7-4e85-852e-d25b0529497a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km2xf" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.598724 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" 
(UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.598743 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bcbc6938-ae1b-4306-a73d-7f2c5dc64047-images\") pod \"machine-api-operator-5694c8668f-dzh8r\" (UID: \"bcbc6938-ae1b-4306-a73d-7f2c5dc64047\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dzh8r" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.598777 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/60ed0c7a-5210-4706-b7b6-d989561edf26-machine-approver-tls\") pod \"machine-approver-56656f9798-dqmfz\" (UID: \"60ed0c7a-5210-4706-b7b6-d989561edf26\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqmfz" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.598812 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k89fv\" (UniqueName: \"kubernetes.io/projected/042c5da0-34af-4413-af57-feb5f484bfc3-kube-api-access-k89fv\") pod \"openshift-config-operator-7777fb866f-ms2fp\" (UID: \"042c5da0-34af-4413-af57-feb5f484bfc3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms2fp" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.598841 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-encryption-config\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.598860 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9d30ed7a-3577-40f4-8d32-eec9f851ab19-console-oauth-config\") pod \"console-f9d7485db-798pd\" (UID: \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\") " pod="openshift-console/console-f9d7485db-798pd" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.598875 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c677e814-7e89-49be-a000-091b8e49d6b8-config\") pod \"console-operator-58897d9998-l28pf\" (UID: \"c677e814-7e89-49be-a000-091b8e49d6b8\") " pod="openshift-console-operator/console-operator-58897d9998-l28pf" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.598918 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/421f29d9-28d7-4e85-852e-d25b0529497a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-km2xf\" (UID: \"421f29d9-28d7-4e85-852e-d25b0529497a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km2xf" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.598949 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bcbc6938-ae1b-4306-a73d-7f2c5dc64047-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dzh8r\" (UID: \"bcbc6938-ae1b-4306-a73d-7f2c5dc64047\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dzh8r" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.598967 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5354347e-2a7e-42d4-a13c-33daf97e79c0-etcd-client\") pod \"apiserver-7bbb656c7d-kcz78\" (UID: \"5354347e-2a7e-42d4-a13c-33daf97e79c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.598996 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.599014 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9d30ed7a-3577-40f4-8d32-eec9f851ab19-console-config\") pod \"console-f9d7485db-798pd\" (UID: \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\") " pod="openshift-console/console-f9d7485db-798pd" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.599029 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv7nw\" (UniqueName: \"kubernetes.io/projected/d9188831-917b-434c-b118-24c7971f6381-kube-api-access-pv7nw\") pod \"openshift-apiserver-operator-796bbdcf4f-8rg9n\" (UID: \"d9188831-917b-434c-b118-24c7971f6381\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rg9n" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.599055 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqzwr\" (UniqueName: \"kubernetes.io/projected/c77a843c-6b36-4143-aff0-f5e7d227c11d-kube-api-access-vqzwr\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.599069 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c677e814-7e89-49be-a000-091b8e49d6b8-trusted-ca\") pod \"console-operator-58897d9998-l28pf\" (UID: \"c677e814-7e89-49be-a000-091b8e49d6b8\") " pod="openshift-console-operator/console-operator-58897d9998-l28pf" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.599323 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5sfl" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.601674 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.603336 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8hq7n"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.603964 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8hq7n" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.604050 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-fvnl4"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.604784 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fvnl4" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.605713 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x4dl"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.606337 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x4dl" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.607647 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.609298 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.609567 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.609430 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.609476 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.609990 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.610053 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.610100 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.610276 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.610514 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 24 11:11:22 crc 
kubenswrapper[5072]: I1124 11:11:22.611359 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.612023 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.612618 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.612729 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.612820 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.612905 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.612993 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.613068 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.613139 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.614021 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.614411 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.614498 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.614568 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.614653 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.614715 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.614996 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.615102 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.615191 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.615257 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 
11:11:22.615274 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.615314 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.615333 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.615348 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.615417 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.615429 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.615501 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.615510 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.615576 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.615580 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.615591 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.615659 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.615725 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.615795 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.615978 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.616063 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.616146 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.616267 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.616359 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 24 11:11:22 crc 
kubenswrapper[5072]: I1124 11:11:22.620398 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.617211 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.619027 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hv7lg"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.617572 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.618093 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.621096 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-km2xf"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.621165 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hv7lg"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.621524 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jj65"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.622859 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dzh8r"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.623018 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.636525 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.636985 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.637699 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7bjm7"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.643946 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nldcl"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.645054 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.645453 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rmzh4"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.650731 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.658741 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.659015 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.662105 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.662453 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.662548 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.662725 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.664989 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ms2fp"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.665308 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.668013 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.670624 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6876"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.672648 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vftrc"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.674962 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9w2qz"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.674994 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-798pd"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.675144 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-cztzr"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.675953 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-cztzr"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.676856 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fpxll"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.680210 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-f8msc"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.680774 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-f8msc"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.681812 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5k5rr"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.682580 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.682917 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-x6g8r"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.684131 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ln5s8"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.685300 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rxs28"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.686866 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-l28pf"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.687978 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5d2ld"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.689126 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qtf9d"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.690277 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ztvf4"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.691238 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h6q9x"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.692526 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8hq7n"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.693744 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-m47n7"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.695327 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4fg22"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.695791 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rg9n"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.697357 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bm2lw"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.697864 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-q8585"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.699188 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700073 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nwsjb"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700183 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5354347e-2a7e-42d4-a13c-33daf97e79c0-audit-dir\") pod \"apiserver-7bbb656c7d-kcz78\" (UID: \"5354347e-2a7e-42d4-a13c-33daf97e79c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700214 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9188831-917b-434c-b118-24c7971f6381-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8rg9n\" (UID: \"d9188831-917b-434c-b118-24c7971f6381\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rg9n"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700236 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/421f29d9-28d7-4e85-852e-d25b0529497a-config\") pod \"controller-manager-879f6c89f-km2xf\" (UID: \"421f29d9-28d7-4e85-852e-d25b0529497a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km2xf"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700256 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/042c5da0-34af-4413-af57-feb5f484bfc3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ms2fp\" (UID: \"042c5da0-34af-4413-af57-feb5f484bfc3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms2fp"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700275 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/421f29d9-28d7-4e85-852e-d25b0529497a-serving-cert\") pod \"controller-manager-879f6c89f-km2xf\" (UID: \"421f29d9-28d7-4e85-852e-d25b0529497a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km2xf"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700291 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700274 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5354347e-2a7e-42d4-a13c-33daf97e79c0-audit-dir\") pod \"apiserver-7bbb656c7d-kcz78\" (UID: \"5354347e-2a7e-42d4-a13c-33daf97e79c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700314 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-audit\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700416 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-node-pullsecrets\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700437 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxnz7\" (UniqueName: \"kubernetes.io/projected/9d30ed7a-3577-40f4-8d32-eec9f851ab19-kube-api-access-sxnz7\") pod \"console-f9d7485db-798pd\" (UID: \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\") " pod="openshift-console/console-f9d7485db-798pd"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700454 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60ed0c7a-5210-4706-b7b6-d989561edf26-config\") pod \"machine-approver-56656f9798-dqmfz\" (UID: \"60ed0c7a-5210-4706-b7b6-d989561edf26\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqmfz"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700470 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/24b0c90f-a223-41e9-beb5-619fdeaf49c1-metrics-tls\") pod \"dns-operator-744455d44c-rmzh4\" (UID: \"24b0c90f-a223-41e9-beb5-619fdeaf49c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-rmzh4"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700535 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw648\" (UniqueName: \"kubernetes.io/projected/1cd359a9-17ba-43c9-8cb3-7c786777226b-kube-api-access-bw648\") pod \"downloads-7954f5f757-fpxll\" (UID: \"1cd359a9-17ba-43c9-8cb3-7c786777226b\") " pod="openshift-console/downloads-7954f5f757-fpxll"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700553 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d33a4711-23b8-41cb-bf35-708e252369ac-config\") pod \"authentication-operator-69f744f599-q8585\" (UID: \"d33a4711-23b8-41cb-bf35-708e252369ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q8585"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700657 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700699 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkwnb\" (UniqueName: \"kubernetes.io/projected/421f29d9-28d7-4e85-852e-d25b0529497a-kube-api-access-hkwnb\") pod \"controller-manager-879f6c89f-km2xf\" (UID: \"421f29d9-28d7-4e85-852e-d25b0529497a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km2xf"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700717 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700735 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bcbc6938-ae1b-4306-a73d-7f2c5dc64047-images\") pod \"machine-api-operator-5694c8668f-dzh8r\" (UID: \"bcbc6938-ae1b-4306-a73d-7f2c5dc64047\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dzh8r"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700770 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/60ed0c7a-5210-4706-b7b6-d989561edf26-machine-approver-tls\") pod \"machine-approver-56656f9798-dqmfz\" (UID: \"60ed0c7a-5210-4706-b7b6-d989561edf26\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqmfz"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700792 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k89fv\" (UniqueName: \"kubernetes.io/projected/042c5da0-34af-4413-af57-feb5f484bfc3-kube-api-access-k89fv\") pod \"openshift-config-operator-7777fb866f-ms2fp\" (UID: \"042c5da0-34af-4413-af57-feb5f484bfc3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms2fp"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700843 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-encryption-config\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700861 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9d30ed7a-3577-40f4-8d32-eec9f851ab19-console-oauth-config\") pod \"console-f9d7485db-798pd\" (UID: \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\") " pod="openshift-console/console-f9d7485db-798pd"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700877 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c677e814-7e89-49be-a000-091b8e49d6b8-config\") pod \"console-operator-58897d9998-l28pf\" (UID: \"c677e814-7e89-49be-a000-091b8e49d6b8\") " pod="openshift-console-operator/console-operator-58897d9998-l28pf"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700904 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/421f29d9-28d7-4e85-852e-d25b0529497a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-km2xf\" (UID: \"421f29d9-28d7-4e85-852e-d25b0529497a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km2xf"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700922 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5354347e-2a7e-42d4-a13c-33daf97e79c0-etcd-client\") pod \"apiserver-7bbb656c7d-kcz78\" (UID: \"5354347e-2a7e-42d4-a13c-33daf97e79c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700945 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700963 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcbc6938-ae1b-4306-a73d-7f2c5dc64047-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dzh8r\" (UID: \"bcbc6938-ae1b-4306-a73d-7f2c5dc64047\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dzh8r"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700978 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv7nw\" (UniqueName: \"kubernetes.io/projected/d9188831-917b-434c-b118-24c7971f6381-kube-api-access-pv7nw\") pod \"openshift-apiserver-operator-796bbdcf4f-8rg9n\" (UID: \"d9188831-917b-434c-b118-24c7971f6381\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rg9n"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700998 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9d30ed7a-3577-40f4-8d32-eec9f851ab19-console-config\") pod \"console-f9d7485db-798pd\" (UID: \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\") " pod="openshift-console/console-f9d7485db-798pd"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701017 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c677e814-7e89-49be-a000-091b8e49d6b8-trusted-ca\") pod \"console-operator-58897d9998-l28pf\" (UID: \"c677e814-7e89-49be-a000-091b8e49d6b8\") " pod="openshift-console-operator/console-operator-58897d9998-l28pf"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701036 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqzwr\" (UniqueName: \"kubernetes.io/projected/c77a843c-6b36-4143-aff0-f5e7d227c11d-kube-api-access-vqzwr\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701082 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d33a4711-23b8-41cb-bf35-708e252369ac-serving-cert\") pod \"authentication-operator-69f744f599-q8585\" (UID: \"d33a4711-23b8-41cb-bf35-708e252369ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q8585"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701098 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d33a4711-23b8-41cb-bf35-708e252369ac-service-ca-bundle\") pod \"authentication-operator-69f744f599-q8585\" (UID: \"d33a4711-23b8-41cb-bf35-708e252369ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q8585"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701115 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs99l\" (UniqueName: \"kubernetes.io/projected/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-kube-api-access-gs99l\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701134 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-audit-dir\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701153 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nfcp\" (UniqueName: \"kubernetes.io/projected/b2182353-061f-40bf-8f81-1cb1aaaf1b97-kube-api-access-2nfcp\") pod \"cluster-samples-operator-665b6dd947-nldcl\" (UID: \"b2182353-061f-40bf-8f81-1cb1aaaf1b97\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nldcl"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701178 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca699c4e-ccec-4ff8-895f-109777beca4c-client-ca\") pod \"route-controller-manager-6576b87f9c-mzvpf\" (UID: \"ca699c4e-ccec-4ff8-895f-109777beca4c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701194 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwkb5\" (UniqueName: \"kubernetes.io/projected/24b0c90f-a223-41e9-beb5-619fdeaf49c1-kube-api-access-dwkb5\") pod \"dns-operator-744455d44c-rmzh4\" (UID: \"24b0c90f-a223-41e9-beb5-619fdeaf49c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-rmzh4"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701212 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701226 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5354347e-2a7e-42d4-a13c-33daf97e79c0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kcz78\" (UID: \"5354347e-2a7e-42d4-a13c-33daf97e79c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701242 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701257 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7hl9\" (UniqueName: \"kubernetes.io/projected/60ed0c7a-5210-4706-b7b6-d989561edf26-kube-api-access-j7hl9\") pod \"machine-approver-56656f9798-dqmfz\" (UID: \"60ed0c7a-5210-4706-b7b6-d989561edf26\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqmfz"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701271 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca699c4e-ccec-4ff8-895f-109777beca4c-serving-cert\") pod \"route-controller-manager-6576b87f9c-mzvpf\" (UID: \"ca699c4e-ccec-4ff8-895f-109777beca4c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701285 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/421f29d9-28d7-4e85-852e-d25b0529497a-client-ca\") pod \"controller-manager-879f6c89f-km2xf\" (UID: \"421f29d9-28d7-4e85-852e-d25b0529497a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km2xf"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701304 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c77a843c-6b36-4143-aff0-f5e7d227c11d-audit-policies\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701339 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-etcd-client\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701343 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60ed0c7a-5210-4706-b7b6-d989561edf26-config\") pod \"machine-approver-56656f9798-dqmfz\" (UID: \"60ed0c7a-5210-4706-b7b6-d989561edf26\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqmfz"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701355 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-etcd-serving-ca\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701384 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d30ed7a-3577-40f4-8d32-eec9f851ab19-trusted-ca-bundle\") pod \"console-f9d7485db-798pd\" (UID: \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\") " pod="openshift-console/console-f9d7485db-798pd"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701404 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701422 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnqlw\" (UniqueName: \"kubernetes.io/projected/f62763cf-97b0-41ff-bac4-e4acd8060859-kube-api-access-cnqlw\") pod \"cluster-image-registry-operator-dc59b4c8b-4fg22\" (UID: \"f62763cf-97b0-41ff-bac4-e4acd8060859\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4fg22"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701456 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9d30ed7a-3577-40f4-8d32-eec9f851ab19-service-ca\") pod \"console-f9d7485db-798pd\" (UID: \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\") " pod="openshift-console/console-f9d7485db-798pd"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701471 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c677e814-7e89-49be-a000-091b8e49d6b8-serving-cert\") pod \"console-operator-58897d9998-l28pf\" (UID: \"c677e814-7e89-49be-a000-091b8e49d6b8\") " pod="openshift-console-operator/console-operator-58897d9998-l28pf"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701495 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c77a843c-6b36-4143-aff0-f5e7d227c11d-audit-dir\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701526 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-image-import-ca\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701578 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca699c4e-ccec-4ff8-895f-109777beca4c-config\") pod \"route-controller-manager-6576b87f9c-mzvpf\" (UID: \"ca699c4e-ccec-4ff8-895f-109777beca4c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701598 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnr62\" (UniqueName: \"kubernetes.io/projected/bcbc6938-ae1b-4306-a73d-7f2c5dc64047-kube-api-access-pnr62\") pod \"machine-api-operator-5694c8668f-dzh8r\" (UID: \"bcbc6938-ae1b-4306-a73d-7f2c5dc64047\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dzh8r"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701612 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9d30ed7a-3577-40f4-8d32-eec9f851ab19-oauth-serving-cert\") pod \"console-f9d7485db-798pd\" (UID: \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\") " pod="openshift-console/console-f9d7485db-798pd"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701630 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brlb2\" (UniqueName: \"kubernetes.io/projected/c677e814-7e89-49be-a000-091b8e49d6b8-kube-api-access-brlb2\") pod \"console-operator-58897d9998-l28pf\" (UID: \"c677e814-7e89-49be-a000-091b8e49d6b8\") " pod="openshift-console-operator/console-operator-58897d9998-l28pf"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701645 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9188831-917b-434c-b118-24c7971f6381-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8rg9n\" (UID: \"d9188831-917b-434c-b118-24c7971f6381\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rg9n"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701660 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5354347e-2a7e-42d4-a13c-33daf97e79c0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kcz78\" (UID: \"5354347e-2a7e-42d4-a13c-33daf97e79c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701676 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5354347e-2a7e-42d4-a13c-33daf97e79c0-encryption-config\") pod \"apiserver-7bbb656c7d-kcz78\" (UID: \"5354347e-2a7e-42d4-a13c-33daf97e79c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701719 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701735 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcbc6938-ae1b-4306-a73d-7f2c5dc64047-config\") pod \"machine-api-operator-5694c8668f-dzh8r\" (UID: \"bcbc6938-ae1b-4306-a73d-7f2c5dc64047\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dzh8r"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701760 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701776 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701792 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701808 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f62763cf-97b0-41ff-bac4-e4acd8060859-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4fg22\" (UID: \"f62763cf-97b0-41ff-bac4-e4acd8060859\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4fg22"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701823 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701837 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5354347e-2a7e-42d4-a13c-33daf97e79c0-audit-policies\") pod \"apiserver-7bbb656c7d-kcz78\" (UID: \"5354347e-2a7e-42d4-a13c-33daf97e79c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701852 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5354347e-2a7e-42d4-a13c-33daf97e79c0-serving-cert\") pod \"apiserver-7bbb656c7d-kcz78\" (UID: \"5354347e-2a7e-42d4-a13c-33daf97e79c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701869 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d33a4711-23b8-41cb-bf35-708e252369ac-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-q8585\" (UID: \"d33a4711-23b8-41cb-bf35-708e252369ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q8585"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701884 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hwng\" (UniqueName: \"kubernetes.io/projected/ca699c4e-ccec-4ff8-895f-109777beca4c-kube-api-access-9hwng\") pod \"route-controller-manager-6576b87f9c-mzvpf\" (UID: \"ca699c4e-ccec-4ff8-895f-109777beca4c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701903 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f62763cf-97b0-41ff-bac4-e4acd8060859-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4fg22\" (UID: \"f62763cf-97b0-41ff-bac4-e4acd8060859\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4fg22"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701919 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-serving-cert\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701955 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b2182353-061f-40bf-8f81-1cb1aaaf1b97-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-nldcl\" (UID: \"b2182353-061f-40bf-8f81-1cb1aaaf1b97\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nldcl"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.701979 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-config\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.702003 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5c87ed3-ec26-42d1-99d0-37fd576f970d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bm2lw\" (UID: \"a5c87ed3-ec26-42d1-99d0-37fd576f970d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bm2lw"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.702025 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/60ed0c7a-5210-4706-b7b6-d989561edf26-auth-proxy-config\") pod \"machine-approver-56656f9798-dqmfz\" (UID: \"60ed0c7a-5210-4706-b7b6-d989561edf26\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqmfz"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.702048 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f62763cf-97b0-41ff-bac4-e4acd8060859-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4fg22\" (UID: \"f62763cf-97b0-41ff-bac4-e4acd8060859\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4fg22"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.702067 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgr77\" (UniqueName: \"kubernetes.io/projected/5354347e-2a7e-42d4-a13c-33daf97e79c0-kube-api-access-qgr77\") pod \"apiserver-7bbb656c7d-kcz78\" (UID: \"5354347e-2a7e-42d4-a13c-33daf97e79c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.702083 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a5c87ed3-ec26-42d1-99d0-37fd576f970d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bm2lw\" (UID: \"a5c87ed3-ec26-42d1-99d0-37fd576f970d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bm2lw"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.702101 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/042c5da0-34af-4413-af57-feb5f484bfc3-serving-cert\") pod \"openshift-config-operator-7777fb866f-ms2fp\" (UID: \"042c5da0-34af-4413-af57-feb5f484bfc3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms2fp"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.702124 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9d30ed7a-3577-40f4-8d32-eec9f851ab19-console-serving-cert\") pod \"console-f9d7485db-798pd\" (UID: \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\") " pod="openshift-console/console-f9d7485db-798pd"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.702146 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5c87ed3-ec26-42d1-99d0-37fd576f970d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bm2lw\" (UID: \"a5c87ed3-ec26-42d1-99d0-37fd576f970d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bm2lw"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.702201 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4m8t\" (UniqueName: \"kubernetes.io/projected/d33a4711-23b8-41cb-bf35-708e252369ac-kube-api-access-k4m8t\") pod \"authentication-operator-69f744f599-q8585\" (UID: \"d33a4711-23b8-41cb-bf35-708e252369ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q8585"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.702295 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-audit\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.702381 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/421f29d9-28d7-4e85-852e-d25b0529497a-config\") pod \"controller-manager-879f6c89f-km2xf\" (UID: \"421f29d9-28d7-4e85-852e-d25b0529497a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km2xf"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.702422 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hv7lg"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.702757 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x4dl"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.702900 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.703118 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9d30ed7a-3577-40f4-8d32-eec9f851ab19-service-ca\") pod \"console-f9d7485db-798pd\" (UID: \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\") " pod="openshift-console/console-f9d7485db-798pd"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.703223 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-audit-dir\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.703693 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d33a4711-23b8-41cb-bf35-708e252369ac-config\") pod \"authentication-operator-69f744f599-q8585\" (UID: \"d33a4711-23b8-41cb-bf35-708e252369ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q8585"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.703952 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5354347e-2a7e-42d4-a13c-33daf97e79c0-audit-policies\") pod \"apiserver-7bbb656c7d-kcz78\" (UID: \"5354347e-2a7e-42d4-a13c-33daf97e79c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.703971 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca699c4e-ccec-4ff8-895f-109777beca4c-client-ca\") pod \"route-controller-manager-6576b87f9c-mzvpf\" (UID: \"ca699c4e-ccec-4ff8-895f-109777beca4c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.704070 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.700977 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-node-pullsecrets\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.704843 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.705068 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5354347e-2a7e-42d4-a13c-33daf97e79c0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kcz78\" (UID: \"5354347e-2a7e-42d4-a13c-33daf97e79c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.705007 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9d30ed7a-3577-40f4-8d32-eec9f851ab19-console-config\") pod \"console-f9d7485db-798pd\" (UID: \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\") " pod="openshift-console/console-f9d7485db-798pd"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.705880 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.706086 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/042c5da0-34af-4413-af57-feb5f484bfc3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ms2fp\" (UID: \"042c5da0-34af-4413-af57-feb5f484bfc3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms2fp"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.706123 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c677e814-7e89-49be-a000-091b8e49d6b8-config\") pod \"console-operator-58897d9998-l28pf\" (UID: \"c677e814-7e89-49be-a000-091b8e49d6b8\") " pod="openshift-console-operator/console-operator-58897d9998-l28pf"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.706653 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bcbc6938-ae1b-4306-a73d-7f2c5dc64047-images\") pod \"machine-api-operator-5694c8668f-dzh8r\" (UID: \"bcbc6938-ae1b-4306-a73d-7f2c5dc64047\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dzh8r"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.706827 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5sfl"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.706866 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-9prrw"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.706951 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.707241 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5354347e-2a7e-42d4-a13c-33daf97e79c0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kcz78\" (UID: \"5354347e-2a7e-42d4-a13c-33daf97e79c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.707668 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/421f29d9-28d7-4e85-852e-d25b0529497a-serving-cert\") pod \"controller-manager-879f6c89f-km2xf\" (UID: \"421f29d9-28d7-4e85-852e-d25b0529497a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km2xf"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.707926 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca699c4e-ccec-4ff8-895f-109777beca4c-config\") pod \"route-controller-manager-6576b87f9c-mzvpf\" (UID: \"ca699c4e-ccec-4ff8-895f-109777beca4c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.707980 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9prrw"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.708179 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-dxmxv"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.708505 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/24b0c90f-a223-41e9-beb5-619fdeaf49c1-metrics-tls\") pod \"dns-operator-744455d44c-rmzh4\" (UID: \"24b0c90f-a223-41e9-beb5-619fdeaf49c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-rmzh4"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.708704 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c677e814-7e89-49be-a000-091b8e49d6b8-trusted-ca\") pod \"console-operator-58897d9998-l28pf\" (UID: \"c677e814-7e89-49be-a000-091b8e49d6b8\") " pod="openshift-console-operator/console-operator-58897d9998-l28pf"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.708739 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9188831-917b-434c-b118-24c7971f6381-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8rg9n\" (UID: \"d9188831-917b-434c-b118-24c7971f6381\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rg9n"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.708813 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-dxmxv"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.709767 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9188831-917b-434c-b118-24c7971f6381-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8rg9n\" (UID: \"d9188831-917b-434c-b118-24c7971f6381\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rg9n"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.710021 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5c87ed3-ec26-42d1-99d0-37fd576f970d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bm2lw\" (UID: \"a5c87ed3-ec26-42d1-99d0-37fd576f970d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bm2lw"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.710313 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5354347e-2a7e-42d4-a13c-33daf97e79c0-serving-cert\") pod \"apiserver-7bbb656c7d-kcz78\" (UID: \"5354347e-2a7e-42d4-a13c-33daf97e79c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.710459 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcbc6938-ae1b-4306-a73d-7f2c5dc64047-config\") pod \"machine-api-operator-5694c8668f-dzh8r\" (UID: \"bcbc6938-ae1b-4306-a73d-7f2c5dc64047\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dzh8r"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.711061 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9d30ed7a-3577-40f4-8d32-eec9f851ab19-console-oauth-config\") pod \"console-f9d7485db-798pd\" (UID: \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\") " pod="openshift-console/console-f9d7485db-798pd"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.711766 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f62763cf-97b0-41ff-bac4-e4acd8060859-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4fg22\" (UID: \"f62763cf-97b0-41ff-bac4-e4acd8060859\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4fg22"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.711882 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-dxmxv"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.713653 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d33a4711-23b8-41cb-bf35-708e252369ac-service-ca-bundle\") pod \"authentication-operator-69f744f599-q8585\" (UID: \"d33a4711-23b8-41cb-bf35-708e252369ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q8585"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.713967 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.714114 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c77a843c-6b36-4143-aff0-f5e7d227c11d-audit-dir\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.715257 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399700-hnjjf"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.715519 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/421f29d9-28d7-4e85-852e-d25b0529497a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-km2xf\" (UID: \"421f29d9-28d7-4e85-852e-d25b0529497a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km2xf"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.716667 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/421f29d9-28d7-4e85-852e-d25b0529497a-client-ca\") pod \"controller-manager-879f6c89f-km2xf\" (UID: \"421f29d9-28d7-4e85-852e-d25b0529497a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km2xf"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.716777 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d33a4711-23b8-41cb-bf35-708e252369ac-serving-cert\") pod \"authentication-operator-69f744f599-q8585\" (UID: \"d33a4711-23b8-41cb-bf35-708e252369ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q8585"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.717038 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c77a843c-6b36-4143-aff0-f5e7d227c11d-audit-policies\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.717280 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcbc6938-ae1b-4306-a73d-7f2c5dc64047-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dzh8r\" (UID: \"bcbc6938-ae1b-4306-a73d-7f2c5dc64047\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dzh8r"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.717477 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-f8msc"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.719310 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/60ed0c7a-5210-4706-b7b6-d989561edf26-auth-proxy-config\") pod \"machine-approver-56656f9798-dqmfz\" (UID: \"60ed0c7a-5210-4706-b7b6-d989561edf26\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqmfz"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.719891 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c677e814-7e89-49be-a000-091b8e49d6b8-serving-cert\") pod \"console-operator-58897d9998-l28pf\" (UID: \"c677e814-7e89-49be-a000-091b8e49d6b8\") " pod="openshift-console-operator/console-operator-58897d9998-l28pf"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.720548 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.720834 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-config\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.721006 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d33a4711-23b8-41cb-bf35-708e252369ac-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-q8585\" (UID: \"d33a4711-23b8-41cb-bf35-708e252369ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q8585"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.721018 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-etcd-client\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.721251 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.721440 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.721811 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5c87ed3-ec26-42d1-99d0-37fd576f970d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bm2lw\" (UID: \"a5c87ed3-ec26-42d1-99d0-37fd576f970d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bm2lw"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.721816 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-serving-cert\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.721836 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-cztzr"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.722139 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.722729 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9d30ed7a-3577-40f4-8d32-eec9f851ab19-oauth-serving-cert\") pod \"console-f9d7485db-798pd\" (UID: \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\") " pod="openshift-console/console-f9d7485db-798pd"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.722948 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5354347e-2a7e-42d4-a13c-33daf97e79c0-encryption-config\") pod \"apiserver-7bbb656c7d-kcz78\" (UID: \"5354347e-2a7e-42d4-a13c-33daf97e79c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.723179 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.723748 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d30ed7a-3577-40f4-8d32-eec9f851ab19-trusted-ca-bundle\") pod \"console-f9d7485db-798pd\" (UID: \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\") " pod="openshift-console/console-f9d7485db-798pd"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.723995 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.724084 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.724127 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5354347e-2a7e-42d4-a13c-33daf97e79c0-etcd-client\") pod \"apiserver-7bbb656c7d-kcz78\" (UID: \"5354347e-2a7e-42d4-a13c-33daf97e79c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.724234 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-image-import-ca\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.724661 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b2182353-061f-40bf-8f81-1cb1aaaf1b97-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-nldcl\" (UID: \"b2182353-061f-40bf-8f81-1cb1aaaf1b97\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nldcl"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.724723 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.724893 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9d30ed7a-3577-40f4-8d32-eec9f851ab19-console-serving-cert\") pod \"console-f9d7485db-798pd\" (UID: \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\") " pod="openshift-console/console-f9d7485db-798pd"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.724910 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f62763cf-97b0-41ff-bac4-e4acd8060859-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4fg22\" (UID: \"f62763cf-97b0-41ff-bac4-e4acd8060859\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4fg22"
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.725743 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-fvnl4"]
Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.725928 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca699c4e-ccec-4ff8-895f-109777beca4c-serving-cert\") pod \"route-controller-manager-6576b87f9c-mzvpf\" (UID: \"ca699c4e-ccec-4ff8-895f-109777beca4c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.726468 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/042c5da0-34af-4413-af57-feb5f484bfc3-serving-cert\") pod \"openshift-config-operator-7777fb866f-ms2fp\" (UID: \"042c5da0-34af-4413-af57-feb5f484bfc3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms2fp" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.726995 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/60ed0c7a-5210-4706-b7b6-d989561edf26-machine-approver-tls\") pod \"machine-approver-56656f9798-dqmfz\" (UID: \"60ed0c7a-5210-4706-b7b6-d989561edf26\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqmfz" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.728782 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4qrkp"] Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.731093 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-encryption-config\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.733832 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-etcd-serving-ca\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.740415 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.760593 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.780866 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.801089 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.821136 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.840657 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.861243 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.881403 5072 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.901726 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.920759 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.941745 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.962159 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 24 11:11:22 crc kubenswrapper[5072]: I1124 11:11:22.981317 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.001419 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.029306 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.041449 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.061396 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.081917 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.102647 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.121787 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.141164 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.161365 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.182288 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.201103 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.222502 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.241427 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.261734 5072 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.280747 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.301785 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.322255 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.342233 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.361419 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.381479 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.401152 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.421331 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.441503 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.460950 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.481411 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.501532 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.521080 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.540608 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.559137 5072 request.go:700] Waited for 1.003208286s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackageserver-service-cert&limit=500&resourceVersion=0 Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.560502 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 24 11:11:23 crc 
kubenswrapper[5072]: I1124 11:11:23.581310 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.602722 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.622015 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.641112 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.661303 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.681748 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.701555 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.741842 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.761113 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.781335 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.801073 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.821874 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.841264 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.861100 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.881776 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.911728 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.921637 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.941417 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.961806 5072 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 24 11:11:23 crc kubenswrapper[5072]: I1124 11:11:23.981208 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.004470 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.021780 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.041878 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.063527 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.081676 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.121817 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.141622 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.161941 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.180839 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.202130 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.221509 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.240842 5072 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.261785 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.281416 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.301461 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.321134 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.341605 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.390565 5072 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw648\" (UniqueName: \"kubernetes.io/projected/1cd359a9-17ba-43c9-8cb3-7c786777226b-kube-api-access-bw648\") pod \"downloads-7954f5f757-fpxll\" (UID: \"1cd359a9-17ba-43c9-8cb3-7c786777226b\") " pod="openshift-console/downloads-7954f5f757-fpxll" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.411184 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxnz7\" (UniqueName: \"kubernetes.io/projected/9d30ed7a-3577-40f4-8d32-eec9f851ab19-kube-api-access-sxnz7\") pod \"console-f9d7485db-798pd\" (UID: \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\") " pod="openshift-console/console-f9d7485db-798pd" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.430620 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4m8t\" (UniqueName: \"kubernetes.io/projected/d33a4711-23b8-41cb-bf35-708e252369ac-kube-api-access-k4m8t\") pod \"authentication-operator-69f744f599-q8585\" (UID: \"d33a4711-23b8-41cb-bf35-708e252369ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q8585" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.449610 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkwnb\" (UniqueName: \"kubernetes.io/projected/421f29d9-28d7-4e85-852e-d25b0529497a-kube-api-access-hkwnb\") pod \"controller-manager-879f6c89f-km2xf\" (UID: \"421f29d9-28d7-4e85-852e-d25b0529497a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km2xf" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.463645 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-fpxll" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.471597 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k89fv\" (UniqueName: \"kubernetes.io/projected/042c5da0-34af-4413-af57-feb5f484bfc3-kube-api-access-k89fv\") pod \"openshift-config-operator-7777fb866f-ms2fp\" (UID: \"042c5da0-34af-4413-af57-feb5f484bfc3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms2fp" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.499743 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-798pd" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.499868 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-q8585" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.510207 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nfcp\" (UniqueName: \"kubernetes.io/projected/b2182353-061f-40bf-8f81-1cb1aaaf1b97-kube-api-access-2nfcp\") pod \"cluster-samples-operator-665b6dd947-nldcl\" (UID: \"b2182353-061f-40bf-8f81-1cb1aaaf1b97\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nldcl" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.521251 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.523525 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwkb5\" (UniqueName: \"kubernetes.io/projected/24b0c90f-a223-41e9-beb5-619fdeaf49c1-kube-api-access-dwkb5\") pod \"dns-operator-744455d44c-rmzh4\" (UID: \"24b0c90f-a223-41e9-beb5-619fdeaf49c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-rmzh4" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.533567 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv7nw\" (UniqueName: \"kubernetes.io/projected/d9188831-917b-434c-b118-24c7971f6381-kube-api-access-pv7nw\") pod \"openshift-apiserver-operator-796bbdcf4f-8rg9n\" (UID: \"d9188831-917b-434c-b118-24c7971f6381\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rg9n" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.541864 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.560091 5072 request.go:700] Waited for 1.851790797s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&limit=500&resourceVersion=0 Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.562074 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.602900 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.604509 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqzwr\" (UniqueName: \"kubernetes.io/projected/c77a843c-6b36-4143-aff0-f5e7d227c11d-kube-api-access-vqzwr\") pod \"oauth-openshift-558db77b4-rxs28\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " pod="openshift-authentication/oauth-openshift-558db77b4-rxs28" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.621732 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.651714 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.654618 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-km2xf" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.685405 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs99l\" (UniqueName: \"kubernetes.io/projected/2b4f223b-f1f8-4e6b-ae06-519bc73d38ea-kube-api-access-gs99l\") pod \"apiserver-76f77b778f-4qrkp\" (UID: \"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea\") " pod="openshift-apiserver/apiserver-76f77b778f-4qrkp" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.694881 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7hl9\" (UniqueName: \"kubernetes.io/projected/60ed0c7a-5210-4706-b7b6-d989561edf26-kube-api-access-j7hl9\") pod \"machine-approver-56656f9798-dqmfz\" (UID: \"60ed0c7a-5210-4706-b7b6-d989561edf26\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqmfz" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.697552 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rxs28" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.716436 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rg9n" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.723605 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f62763cf-97b0-41ff-bac4-e4acd8060859-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4fg22\" (UID: \"f62763cf-97b0-41ff-bac4-e4acd8060859\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4fg22" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.737744 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hwng\" (UniqueName: \"kubernetes.io/projected/ca699c4e-ccec-4ff8-895f-109777beca4c-kube-api-access-9hwng\") pod \"route-controller-manager-6576b87f9c-mzvpf\" (UID: \"ca699c4e-ccec-4ff8-895f-109777beca4c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.747662 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms2fp" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.750471 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-q8585"] Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.763916 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a5c87ed3-ec26-42d1-99d0-37fd576f970d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bm2lw\" (UID: \"a5c87ed3-ec26-42d1-99d0-37fd576f970d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bm2lw" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.774674 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rmzh4" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.779395 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-798pd"] Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.787985 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bm2lw" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.794029 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgr77\" (UniqueName: \"kubernetes.io/projected/5354347e-2a7e-42d4-a13c-33daf97e79c0-kube-api-access-qgr77\") pod \"apiserver-7bbb656c7d-kcz78\" (UID: \"5354347e-2a7e-42d4-a13c-33daf97e79c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.800209 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnr62\" (UniqueName: \"kubernetes.io/projected/bcbc6938-ae1b-4306-a73d-7f2c5dc64047-kube-api-access-pnr62\") pod \"machine-api-operator-5694c8668f-dzh8r\" (UID: \"bcbc6938-ae1b-4306-a73d-7f2c5dc64047\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dzh8r" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.800403 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nldcl" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.814777 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fpxll"] Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.823004 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnqlw\" (UniqueName: \"kubernetes.io/projected/f62763cf-97b0-41ff-bac4-e4acd8060859-kube-api-access-cnqlw\") pod \"cluster-image-registry-operator-dc59b4c8b-4fg22\" (UID: \"f62763cf-97b0-41ff-bac4-e4acd8060859\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4fg22" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.841209 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brlb2\" (UniqueName: \"kubernetes.io/projected/c677e814-7e89-49be-a000-091b8e49d6b8-kube-api-access-brlb2\") pod \"console-operator-58897d9998-l28pf\" (UID: \"c677e814-7e89-49be-a000-091b8e49d6b8\") " pod="openshift-console-operator/console-operator-58897d9998-l28pf" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.844422 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-km2xf"] Nov 24 11:11:24 crc kubenswrapper[5072]: W1124 11:11:24.849503 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d30ed7a_3577_40f4_8d32_eec9f851ab19.slice/crio-16b8bb70a3c0c6a3aa3cde9816118e6c8174c822fe59fe7d3a2903f6c558076d WatchSource:0}: Error finding container 16b8bb70a3c0c6a3aa3cde9816118e6c8174c822fe59fe7d3a2903f6c558076d: Status 404 returned error can't find the container with id 16b8bb70a3c0c6a3aa3cde9816118e6c8174c822fe59fe7d3a2903f6c558076d Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.862485 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-l28pf" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.865498 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4qrkp" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.866744 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4fg22" Nov 24 11:11:24 crc kubenswrapper[5072]: W1124 11:11:24.872778 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cd359a9_17ba_43c9_8cb3_7c786777226b.slice/crio-f1fdfd6115c9d7e442c4faf4f23bcbcad233c9442bf7541cc55eb8622f868a34 WatchSource:0}: Error finding container f1fdfd6115c9d7e442c4faf4f23bcbcad233c9442bf7541cc55eb8622f868a34: Status 404 returned error can't find the container with id f1fdfd6115c9d7e442c4faf4f23bcbcad233c9442bf7541cc55eb8622f868a34 Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.885523 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-dzh8r" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.902013 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.916990 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqmfz" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.931558 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rg9n"] Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.939640 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69a7724d-41d5-4946-81d6-d43497db7319-config\") pod \"kube-apiserver-operator-766d6c64bb-t6876\" (UID: \"69a7724d-41d5-4946-81d6-d43497db7319\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6876" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.939683 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf56j\" (UniqueName: \"kubernetes.io/projected/f662a10c-20f8-49b5-9a41-6a17e156038b-kube-api-access-kf56j\") pod \"machine-config-operator-74547568cd-ln5s8\" (UID: \"f662a10c-20f8-49b5-9a41-6a17e156038b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln5s8" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.939707 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2493e834-4bc7-43eb-a2c3-942598904f3a-webhook-cert\") pod \"packageserver-d55dfcdfc-5k5rr\" (UID: \"2493e834-4bc7-43eb-a2c3-942598904f3a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5k5rr" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.939744 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d68516ef-c18f-4d3f-bc80-71739e73cee1-registry-tls\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: 
\"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.939771 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9bdad3dd-22a5-46d4-be89-9f5f98da1738-etcd-service-ca\") pod \"etcd-operator-b45778765-qtf9d\" (UID: \"9bdad3dd-22a5-46d4-be89-9f5f98da1738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtf9d" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.939796 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f662a10c-20f8-49b5-9a41-6a17e156038b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ln5s8\" (UID: \"f662a10c-20f8-49b5-9a41-6a17e156038b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln5s8" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.939839 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9bdad3dd-22a5-46d4-be89-9f5f98da1738-etcd-client\") pod \"etcd-operator-b45778765-qtf9d\" (UID: \"9bdad3dd-22a5-46d4-be89-9f5f98da1738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtf9d" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.939871 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/164c7d70-1b80-415a-8a7b-fbb1001b1286-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vftrc\" (UID: \"164c7d70-1b80-415a-8a7b-fbb1001b1286\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vftrc" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.939908 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ml6b\" (UniqueName: \"kubernetes.io/projected/164c7d70-1b80-415a-8a7b-fbb1001b1286-kube-api-access-4ml6b\") pod \"ingress-operator-5b745b69d9-vftrc\" (UID: \"164c7d70-1b80-415a-8a7b-fbb1001b1286\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vftrc" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.939939 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/164c7d70-1b80-415a-8a7b-fbb1001b1286-metrics-tls\") pod \"ingress-operator-5b745b69d9-vftrc\" (UID: \"164c7d70-1b80-415a-8a7b-fbb1001b1286\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vftrc" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.939971 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d68516ef-c18f-4d3f-bc80-71739e73cee1-trusted-ca\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.939984 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bdad3dd-22a5-46d4-be89-9f5f98da1738-config\") pod \"etcd-operator-b45778765-qtf9d\" (UID: \"9bdad3dd-22a5-46d4-be89-9f5f98da1738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtf9d" Nov 
24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940010 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxkh6\" (UniqueName: \"kubernetes.io/projected/9bdad3dd-22a5-46d4-be89-9f5f98da1738-kube-api-access-kxkh6\") pod \"etcd-operator-b45778765-qtf9d\" (UID: \"9bdad3dd-22a5-46d4-be89-9f5f98da1738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtf9d" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940026 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2493e834-4bc7-43eb-a2c3-942598904f3a-apiservice-cert\") pod \"packageserver-d55dfcdfc-5k5rr\" (UID: \"2493e834-4bc7-43eb-a2c3-942598904f3a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5k5rr" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940049 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940066 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bdad3dd-22a5-46d4-be89-9f5f98da1738-serving-cert\") pod \"etcd-operator-b45778765-qtf9d\" (UID: \"9bdad3dd-22a5-46d4-be89-9f5f98da1738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtf9d" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940115 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fb23ad0-2566-4f2c-8a33-97e253539289-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-h6q9x\" (UID: \"1fb23ad0-2566-4f2c-8a33-97e253539289\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h6q9x" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940129 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f662a10c-20f8-49b5-9a41-6a17e156038b-proxy-tls\") pod \"machine-config-operator-74547568cd-ln5s8\" (UID: \"f662a10c-20f8-49b5-9a41-6a17e156038b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln5s8" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940153 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d68516ef-c18f-4d3f-bc80-71739e73cee1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940167 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ef682f0-d784-48ac-83f3-4c718f34edaf-metrics-certs\") pod \"router-default-5444994796-wxc9p\" (UID: \"8ef682f0-d784-48ac-83f3-4c718f34edaf\") " pod="openshift-ingress/router-default-5444994796-wxc9p" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 
11:11:24.940216 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/119d4f92-5b02-4cc7-bb41-adcc78ccb157-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7bjm7\" (UID: \"119d4f92-5b02-4cc7-bb41-adcc78ccb157\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7bjm7" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940279 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2493e834-4bc7-43eb-a2c3-942598904f3a-tmpfs\") pod \"packageserver-d55dfcdfc-5k5rr\" (UID: \"2493e834-4bc7-43eb-a2c3-942598904f3a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5k5rr" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940345 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/119d4f92-5b02-4cc7-bb41-adcc78ccb157-config\") pod \"kube-controller-manager-operator-78b949d7b-7bjm7\" (UID: \"119d4f92-5b02-4cc7-bb41-adcc78ccb157\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7bjm7" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940382 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gqpd\" (UniqueName: \"kubernetes.io/projected/2493e834-4bc7-43eb-a2c3-942598904f3a-kube-api-access-7gqpd\") pod \"packageserver-d55dfcdfc-5k5rr\" (UID: \"2493e834-4bc7-43eb-a2c3-942598904f3a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5k5rr" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940399 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48xbw\" (UniqueName: \"kubernetes.io/projected/d68516ef-c18f-4d3f-bc80-71739e73cee1-kube-api-access-48xbw\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940442 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgp6s\" (UniqueName: \"kubernetes.io/projected/613216b8-2838-4eb4-8635-9aa0e797d101-kube-api-access-cgp6s\") pod \"service-ca-operator-777779d784-x6g8r\" (UID: \"613216b8-2838-4eb4-8635-9aa0e797d101\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x6g8r" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940457 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hcl9\" (UniqueName: \"kubernetes.io/projected/2837271a-7003-4e16-aa64-432493decb73-kube-api-access-8hcl9\") pod \"catalog-operator-68c6474976-2jj65\" (UID: \"2837271a-7003-4e16-aa64-432493decb73\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jj65" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940482 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/164c7d70-1b80-415a-8a7b-fbb1001b1286-trusted-ca\") pod \"ingress-operator-5b745b69d9-vftrc\" (UID: \"164c7d70-1b80-415a-8a7b-fbb1001b1286\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vftrc" Nov 
24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940497 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f662a10c-20f8-49b5-9a41-6a17e156038b-images\") pod \"machine-config-operator-74547568cd-ln5s8\" (UID: \"f662a10c-20f8-49b5-9a41-6a17e156038b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln5s8" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940511 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2837271a-7003-4e16-aa64-432493decb73-srv-cert\") pod \"catalog-operator-68c6474976-2jj65\" (UID: \"2837271a-7003-4e16-aa64-432493decb73\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jj65" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940533 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r26gl\" (UniqueName: \"kubernetes.io/projected/1fb23ad0-2566-4f2c-8a33-97e253539289-kube-api-access-r26gl\") pod \"openshift-controller-manager-operator-756b6f6bc6-h6q9x\" (UID: \"1fb23ad0-2566-4f2c-8a33-97e253539289\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h6q9x" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940597 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b8bcc47-53bd-45a5-937f-b515a314f662-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-nwsjb\" (UID: \"7b8bcc47-53bd-45a5-937f-b515a314f662\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nwsjb" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940653 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fb23ad0-2566-4f2c-8a33-97e253539289-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-h6q9x\" (UID: \"1fb23ad0-2566-4f2c-8a33-97e253539289\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h6q9x" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940668 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69a7724d-41d5-4946-81d6-d43497db7319-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t6876\" (UID: \"69a7724d-41d5-4946-81d6-d43497db7319\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6876" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940721 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8ef682f0-d784-48ac-83f3-4c718f34edaf-default-certificate\") pod \"router-default-5444994796-wxc9p\" (UID: \"8ef682f0-d784-48ac-83f3-4c718f34edaf\") " pod="openshift-ingress/router-default-5444994796-wxc9p" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940738 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwnnl\" (UniqueName: \"kubernetes.io/projected/d5fa82d2-0cf9-46d0-b319-45a36d14a3af-kube-api-access-gwnnl\") pod 
\"multus-admission-controller-857f4d67dd-m47n7\" (UID: \"d5fa82d2-0cf9-46d0-b319-45a36d14a3af\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-m47n7" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940756 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9bdad3dd-22a5-46d4-be89-9f5f98da1738-etcd-ca\") pod \"etcd-operator-b45778765-qtf9d\" (UID: \"9bdad3dd-22a5-46d4-be89-9f5f98da1738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtf9d" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940772 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69a7724d-41d5-4946-81d6-d43497db7319-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t6876\" (UID: \"69a7724d-41d5-4946-81d6-d43497db7319\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6876" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940807 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d68516ef-c18f-4d3f-bc80-71739e73cee1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940822 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/613216b8-2838-4eb4-8635-9aa0e797d101-serving-cert\") pod \"service-ca-operator-777779d784-x6g8r\" (UID: \"613216b8-2838-4eb4-8635-9aa0e797d101\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x6g8r" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940845 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ef682f0-d784-48ac-83f3-4c718f34edaf-service-ca-bundle\") pod \"router-default-5444994796-wxc9p\" (UID: \"8ef682f0-d784-48ac-83f3-4c718f34edaf\") " pod="openshift-ingress/router-default-5444994796-wxc9p" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940860 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt9rp\" (UniqueName: \"kubernetes.io/projected/815768ad-2984-4e34-afb0-4e98c3f0373f-kube-api-access-dt9rp\") pod \"migrator-59844c95c7-5d2ld\" (UID: \"815768ad-2984-4e34-afb0-4e98c3f0373f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5d2ld" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940885 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqbkl\" (UniqueName: \"kubernetes.io/projected/7b8bcc47-53bd-45a5-937f-b515a314f662-kube-api-access-sqbkl\") pod \"control-plane-machine-set-operator-78cbb6b69f-nwsjb\" (UID: \"7b8bcc47-53bd-45a5-937f-b515a314f662\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nwsjb" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940936 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d68516ef-c18f-4d3f-bc80-71739e73cee1-registry-certificates\") 
pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940951 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/613216b8-2838-4eb4-8635-9aa0e797d101-config\") pod \"service-ca-operator-777779d784-x6g8r\" (UID: \"613216b8-2838-4eb4-8635-9aa0e797d101\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x6g8r" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940966 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn5kv\" (UniqueName: \"kubernetes.io/projected/8ef682f0-d784-48ac-83f3-4c718f34edaf-kube-api-access-tn5kv\") pod \"router-default-5444994796-wxc9p\" (UID: \"8ef682f0-d784-48ac-83f3-4c718f34edaf\") " pod="openshift-ingress/router-default-5444994796-wxc9p" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.940994 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/119d4f92-5b02-4cc7-bb41-adcc78ccb157-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7bjm7\" (UID: \"119d4f92-5b02-4cc7-bb41-adcc78ccb157\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7bjm7" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.941019 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d68516ef-c18f-4d3f-bc80-71739e73cee1-bound-sa-token\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.941051 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5fa82d2-0cf9-46d0-b319-45a36d14a3af-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-m47n7\" (UID: \"d5fa82d2-0cf9-46d0-b319-45a36d14a3af\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-m47n7" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.941096 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8ef682f0-d784-48ac-83f3-4c718f34edaf-stats-auth\") pod \"router-default-5444994796-wxc9p\" (UID: \"8ef682f0-d784-48ac-83f3-4c718f34edaf\") " pod="openshift-ingress/router-default-5444994796-wxc9p" Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.941110 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2837271a-7003-4e16-aa64-432493decb73-profile-collector-cert\") pod \"catalog-operator-68c6474976-2jj65\" (UID: \"2837271a-7003-4e16-aa64-432493decb73\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jj65" Nov 24 11:11:24 crc kubenswrapper[5072]: E1124 11:11:24.953291 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-24 11:11:25.45327086 +0000 UTC m=+137.164795346 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:24 crc kubenswrapper[5072]: I1124 11:11:24.987629 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.041729 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.041978 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7v48\" (UniqueName: \"kubernetes.io/projected/8aabc0b3-9299-4b7b-8d00-310cad0b4d63-kube-api-access-b7v48\") pod \"ingress-canary-f8msc\" (UID: \"8aabc0b3-9299-4b7b-8d00-310cad0b4d63\") " pod="openshift-ingress-canary/ingress-canary-f8msc" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042014 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7fd28f12-f21e-4050-9102-45579a294fac-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-fvnl4\" (UID: \"7fd28f12-f21e-4050-9102-45579a294fac\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fvnl4" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042044 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9bdad3dd-22a5-46d4-be89-9f5f98da1738-etcd-client\") pod \"etcd-operator-b45778765-qtf9d\" (UID: \"9bdad3dd-22a5-46d4-be89-9f5f98da1738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtf9d" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042073 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/164c7d70-1b80-415a-8a7b-fbb1001b1286-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vftrc\" (UID: \"164c7d70-1b80-415a-8a7b-fbb1001b1286\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vftrc" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042097 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0a68115-9754-4071-b421-d9627182ff91-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9x4dl\" (UID: \"c0a68115-9754-4071-b421-d9627182ff91\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x4dl" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042121 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4ml6b\" (UniqueName: \"kubernetes.io/projected/164c7d70-1b80-415a-8a7b-fbb1001b1286-kube-api-access-4ml6b\") pod \"ingress-operator-5b745b69d9-vftrc\" (UID: \"164c7d70-1b80-415a-8a7b-fbb1001b1286\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vftrc" Nov 24 11:11:25 crc kubenswrapper[5072]: E1124 11:11:25.042153 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:25.542134637 +0000 UTC m=+137.253659113 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042181 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/04426b83-61f0-4c87-b0e7-f175836692df-registration-dir\") pod \"csi-hostpathplugin-cztzr\" (UID: \"04426b83-61f0-4c87-b0e7-f175836692df\") " pod="hostpath-provisioner/csi-hostpathplugin-cztzr" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042217 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/164c7d70-1b80-415a-8a7b-fbb1001b1286-metrics-tls\") pod \"ingress-operator-5b745b69d9-vftrc\" (UID: \"164c7d70-1b80-415a-8a7b-fbb1001b1286\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vftrc" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042237 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d68516ef-c18f-4d3f-bc80-71739e73cee1-trusted-ca\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042253 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bdad3dd-22a5-46d4-be89-9f5f98da1738-config\") pod \"etcd-operator-b45778765-qtf9d\" (UID: \"9bdad3dd-22a5-46d4-be89-9f5f98da1738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtf9d" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042268 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxkh6\" (UniqueName: \"kubernetes.io/projected/9bdad3dd-22a5-46d4-be89-9f5f98da1738-kube-api-access-kxkh6\") pod \"etcd-operator-b45778765-qtf9d\" (UID: \"9bdad3dd-22a5-46d4-be89-9f5f98da1738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtf9d" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042286 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2493e834-4bc7-43eb-a2c3-942598904f3a-apiservice-cert\") pod \"packageserver-d55dfcdfc-5k5rr\" (UID: \"2493e834-4bc7-43eb-a2c3-942598904f3a\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5k5rr" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042305 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042320 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bdad3dd-22a5-46d4-be89-9f5f98da1738-serving-cert\") pod \"etcd-operator-b45778765-qtf9d\" (UID: \"9bdad3dd-22a5-46d4-be89-9f5f98da1738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtf9d" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042336 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ff258f9c-6ace-46bf-8228-05668edcbdd6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ztvf4\" (UID: \"ff258f9c-6ace-46bf-8228-05668edcbdd6\") " pod="openshift-marketplace/marketplace-operator-79b997595-ztvf4" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042350 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbh28\" (UniqueName: \"kubernetes.io/projected/ff258f9c-6ace-46bf-8228-05668edcbdd6-kube-api-access-hbh28\") pod \"marketplace-operator-79b997595-ztvf4\" (UID: \"ff258f9c-6ace-46bf-8228-05668edcbdd6\") " pod="openshift-marketplace/marketplace-operator-79b997595-ztvf4" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042387 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fb23ad0-2566-4f2c-8a33-97e253539289-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-h6q9x\" (UID: \"1fb23ad0-2566-4f2c-8a33-97e253539289\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h6q9x" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042402 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f662a10c-20f8-49b5-9a41-6a17e156038b-proxy-tls\") pod \"machine-config-operator-74547568cd-ln5s8\" (UID: \"f662a10c-20f8-49b5-9a41-6a17e156038b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln5s8" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042426 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d68516ef-c18f-4d3f-bc80-71739e73cee1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042440 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ef682f0-d784-48ac-83f3-4c718f34edaf-metrics-certs\") pod \"router-default-5444994796-wxc9p\" (UID: \"8ef682f0-d784-48ac-83f3-4c718f34edaf\") " pod="openshift-ingress/router-default-5444994796-wxc9p" Nov 24 11:11:25 crc 
kubenswrapper[5072]: I1124 11:11:25.042465 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/119d4f92-5b02-4cc7-bb41-adcc78ccb157-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7bjm7\" (UID: \"119d4f92-5b02-4cc7-bb41-adcc78ccb157\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7bjm7" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042481 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pr4m\" (UniqueName: \"kubernetes.io/projected/70a53cfd-05d8-426e-9b52-55af67b9c200-kube-api-access-9pr4m\") pod \"olm-operator-6b444d44fb-j5sfl\" (UID: \"70a53cfd-05d8-426e-9b52-55af67b9c200\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5sfl" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042496 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/04426b83-61f0-4c87-b0e7-f175836692df-plugins-dir\") pod \"csi-hostpathplugin-cztzr\" (UID: \"04426b83-61f0-4c87-b0e7-f175836692df\") " pod="hostpath-provisioner/csi-hostpathplugin-cztzr" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042510 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcbdg\" (UniqueName: \"kubernetes.io/projected/c0a68115-9754-4071-b421-d9627182ff91-kube-api-access-zcbdg\") pod \"package-server-manager-789f6589d5-9x4dl\" (UID: \"c0a68115-9754-4071-b421-d9627182ff91\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x4dl" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042528 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blzpj\" (UniqueName: \"kubernetes.io/projected/311af931-95d6-429a-a86a-f54ab066747f-kube-api-access-blzpj\") pod \"kube-storage-version-migrator-operator-b67b599dd-8hq7n\" (UID: \"311af931-95d6-429a-a86a-f54ab066747f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8hq7n" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042548 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2493e834-4bc7-43eb-a2c3-942598904f3a-tmpfs\") pod \"packageserver-d55dfcdfc-5k5rr\" (UID: \"2493e834-4bc7-43eb-a2c3-942598904f3a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5k5rr" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042562 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/311af931-95d6-429a-a86a-f54ab066747f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8hq7n\" (UID: \"311af931-95d6-429a-a86a-f54ab066747f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8hq7n" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042579 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/04426b83-61f0-4c87-b0e7-f175836692df-socket-dir\") pod \"csi-hostpathplugin-cztzr\" (UID: \"04426b83-61f0-4c87-b0e7-f175836692df\") " 
pod="hostpath-provisioner/csi-hostpathplugin-cztzr" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042596 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96be0671-6ddf-4af0-8989-da8c4a4dcfa7-config-volume\") pod \"collect-profiles-29399700-hnjjf\" (UID: \"96be0671-6ddf-4af0-8989-da8c4a4dcfa7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399700-hnjjf" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042615 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/119d4f92-5b02-4cc7-bb41-adcc78ccb157-config\") pod \"kube-controller-manager-operator-78b949d7b-7bjm7\" (UID: \"119d4f92-5b02-4cc7-bb41-adcc78ccb157\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7bjm7" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042636 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gqpd\" (UniqueName: \"kubernetes.io/projected/2493e834-4bc7-43eb-a2c3-942598904f3a-kube-api-access-7gqpd\") pod \"packageserver-d55dfcdfc-5k5rr\" (UID: \"2493e834-4bc7-43eb-a2c3-942598904f3a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5k5rr" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042650 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48xbw\" (UniqueName: \"kubernetes.io/projected/d68516ef-c18f-4d3f-bc80-71739e73cee1-kube-api-access-48xbw\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042669 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgp6s\" (UniqueName: \"kubernetes.io/projected/613216b8-2838-4eb4-8635-9aa0e797d101-kube-api-access-cgp6s\") pod \"service-ca-operator-777779d784-x6g8r\" (UID: \"613216b8-2838-4eb4-8635-9aa0e797d101\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x6g8r" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042686 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hcl9\" (UniqueName: \"kubernetes.io/projected/2837271a-7003-4e16-aa64-432493decb73-kube-api-access-8hcl9\") pod \"catalog-operator-68c6474976-2jj65\" (UID: \"2837271a-7003-4e16-aa64-432493decb73\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jj65" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042711 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/164c7d70-1b80-415a-8a7b-fbb1001b1286-trusted-ca\") pod \"ingress-operator-5b745b69d9-vftrc\" (UID: \"164c7d70-1b80-415a-8a7b-fbb1001b1286\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vftrc" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042726 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f662a10c-20f8-49b5-9a41-6a17e156038b-images\") pod \"machine-config-operator-74547568cd-ln5s8\" (UID: \"f662a10c-20f8-49b5-9a41-6a17e156038b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln5s8" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 
11:11:25.042740 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2837271a-7003-4e16-aa64-432493decb73-srv-cert\") pod \"catalog-operator-68c6474976-2jj65\" (UID: \"2837271a-7003-4e16-aa64-432493decb73\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jj65" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042765 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r26gl\" (UniqueName: \"kubernetes.io/projected/1fb23ad0-2566-4f2c-8a33-97e253539289-kube-api-access-r26gl\") pod \"openshift-controller-manager-operator-756b6f6bc6-h6q9x\" (UID: \"1fb23ad0-2566-4f2c-8a33-97e253539289\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h6q9x" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042784 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/70a53cfd-05d8-426e-9b52-55af67b9c200-srv-cert\") pod \"olm-operator-6b444d44fb-j5sfl\" (UID: \"70a53cfd-05d8-426e-9b52-55af67b9c200\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5sfl" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042808 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b8bcc47-53bd-45a5-937f-b515a314f662-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-nwsjb\" (UID: \"7b8bcc47-53bd-45a5-937f-b515a314f662\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nwsjb" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042826 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d2ba5157-b42f-477c-9db4-84b325960b47-signing-key\") pod \"service-ca-9c57cc56f-hv7lg\" (UID: \"d2ba5157-b42f-477c-9db4-84b325960b47\") " pod="openshift-service-ca/service-ca-9c57cc56f-hv7lg" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042843 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cjpn\" (UniqueName: \"kubernetes.io/projected/d2ba5157-b42f-477c-9db4-84b325960b47-kube-api-access-8cjpn\") pod \"service-ca-9c57cc56f-hv7lg\" (UID: \"d2ba5157-b42f-477c-9db4-84b325960b47\") " pod="openshift-service-ca/service-ca-9c57cc56f-hv7lg" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042860 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fb23ad0-2566-4f2c-8a33-97e253539289-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-h6q9x\" (UID: \"1fb23ad0-2566-4f2c-8a33-97e253539289\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h6q9x" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042877 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69a7724d-41d5-4946-81d6-d43497db7319-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t6876\" (UID: \"69a7724d-41d5-4946-81d6-d43497db7319\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6876" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 
11:11:25.042893 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8mw9\" (UniqueName: \"kubernetes.io/projected/91d52696-3096-4d21-b1b5-8e0abab2b1ba-kube-api-access-g8mw9\") pod \"dns-default-dxmxv\" (UID: \"91d52696-3096-4d21-b1b5-8e0abab2b1ba\") " pod="openshift-dns/dns-default-dxmxv" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042914 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/311af931-95d6-429a-a86a-f54ab066747f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8hq7n\" (UID: \"311af931-95d6-429a-a86a-f54ab066747f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8hq7n" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042930 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8ef682f0-d784-48ac-83f3-4c718f34edaf-default-certificate\") pod \"router-default-5444994796-wxc9p\" (UID: \"8ef682f0-d784-48ac-83f3-4c718f34edaf\") " pod="openshift-ingress/router-default-5444994796-wxc9p" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042945 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwnnl\" (UniqueName: \"kubernetes.io/projected/d5fa82d2-0cf9-46d0-b319-45a36d14a3af-kube-api-access-gwnnl\") pod \"multus-admission-controller-857f4d67dd-m47n7\" (UID: \"d5fa82d2-0cf9-46d0-b319-45a36d14a3af\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-m47n7" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042960 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9bdad3dd-22a5-46d4-be89-9f5f98da1738-etcd-ca\") pod \"etcd-operator-b45778765-qtf9d\" (UID: \"9bdad3dd-22a5-46d4-be89-9f5f98da1738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtf9d" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042976 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69a7724d-41d5-4946-81d6-d43497db7319-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t6876\" (UID: \"69a7724d-41d5-4946-81d6-d43497db7319\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6876" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.042991 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/04426b83-61f0-4c87-b0e7-f175836692df-csi-data-dir\") pod \"csi-hostpathplugin-cztzr\" (UID: \"04426b83-61f0-4c87-b0e7-f175836692df\") " pod="hostpath-provisioner/csi-hostpathplugin-cztzr" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043008 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d68516ef-c18f-4d3f-bc80-71739e73cee1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043023 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/613216b8-2838-4eb4-8635-9aa0e797d101-serving-cert\") pod \"service-ca-operator-777779d784-x6g8r\" (UID: \"613216b8-2838-4eb4-8635-9aa0e797d101\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x6g8r" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043037 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7fd28f12-f21e-4050-9102-45579a294fac-proxy-tls\") pod \"machine-config-controller-84d6567774-fvnl4\" (UID: \"7fd28f12-f21e-4050-9102-45579a294fac\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fvnl4" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043055 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ef682f0-d784-48ac-83f3-4c718f34edaf-service-ca-bundle\") pod \"router-default-5444994796-wxc9p\" (UID: \"8ef682f0-d784-48ac-83f3-4c718f34edaf\") " pod="openshift-ingress/router-default-5444994796-wxc9p" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043072 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt9rp\" (UniqueName: \"kubernetes.io/projected/815768ad-2984-4e34-afb0-4e98c3f0373f-kube-api-access-dt9rp\") pod \"migrator-59844c95c7-5d2ld\" (UID: \"815768ad-2984-4e34-afb0-4e98c3f0373f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5d2ld" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043092 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91d52696-3096-4d21-b1b5-8e0abab2b1ba-config-volume\") pod \"dns-default-dxmxv\" (UID: \"91d52696-3096-4d21-b1b5-8e0abab2b1ba\") " pod="openshift-dns/dns-default-dxmxv" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043109 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqbkl\" (UniqueName: \"kubernetes.io/projected/7b8bcc47-53bd-45a5-937f-b515a314f662-kube-api-access-sqbkl\") pod \"control-plane-machine-set-operator-78cbb6b69f-nwsjb\" (UID: \"7b8bcc47-53bd-45a5-937f-b515a314f662\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nwsjb" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043124 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d2ba5157-b42f-477c-9db4-84b325960b47-signing-cabundle\") pod \"service-ca-9c57cc56f-hv7lg\" (UID: \"d2ba5157-b42f-477c-9db4-84b325960b47\") " pod="openshift-service-ca/service-ca-9c57cc56f-hv7lg" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043141 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92wlj\" (UniqueName: \"kubernetes.io/projected/e58dd08a-2f64-4b2f-8779-3ea2e4088142-kube-api-access-92wlj\") pod \"machine-config-server-9prrw\" (UID: \"e58dd08a-2f64-4b2f-8779-3ea2e4088142\") " pod="openshift-machine-config-operator/machine-config-server-9prrw" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043160 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/91d52696-3096-4d21-b1b5-8e0abab2b1ba-metrics-tls\") pod \"dns-default-dxmxv\" (UID: 
\"91d52696-3096-4d21-b1b5-8e0abab2b1ba\") " pod="openshift-dns/dns-default-dxmxv" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043179 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d68516ef-c18f-4d3f-bc80-71739e73cee1-registry-certificates\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043195 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/613216b8-2838-4eb4-8635-9aa0e797d101-config\") pod \"service-ca-operator-777779d784-x6g8r\" (UID: \"613216b8-2838-4eb4-8635-9aa0e797d101\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x6g8r" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043211 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn5kv\" (UniqueName: \"kubernetes.io/projected/8ef682f0-d784-48ac-83f3-4c718f34edaf-kube-api-access-tn5kv\") pod \"router-default-5444994796-wxc9p\" (UID: \"8ef682f0-d784-48ac-83f3-4c718f34edaf\") " pod="openshift-ingress/router-default-5444994796-wxc9p" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043225 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96be0671-6ddf-4af0-8989-da8c4a4dcfa7-secret-volume\") pod \"collect-profiles-29399700-hnjjf\" (UID: \"96be0671-6ddf-4af0-8989-da8c4a4dcfa7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399700-hnjjf" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043241 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/119d4f92-5b02-4cc7-bb41-adcc78ccb157-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7bjm7\" (UID: \"119d4f92-5b02-4cc7-bb41-adcc78ccb157\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7bjm7" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043256 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d68516ef-c18f-4d3f-bc80-71739e73cee1-bound-sa-token\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043274 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5fa82d2-0cf9-46d0-b319-45a36d14a3af-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-m47n7\" (UID: \"d5fa82d2-0cf9-46d0-b319-45a36d14a3af\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-m47n7" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043288 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/70a53cfd-05d8-426e-9b52-55af67b9c200-profile-collector-cert\") pod \"olm-operator-6b444d44fb-j5sfl\" (UID: \"70a53cfd-05d8-426e-9b52-55af67b9c200\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5sfl" Nov 24 11:11:25 crc 
kubenswrapper[5072]: I1124 11:11:25.043306 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47cz7\" (UniqueName: \"kubernetes.io/projected/7fd28f12-f21e-4050-9102-45579a294fac-kube-api-access-47cz7\") pod \"machine-config-controller-84d6567774-fvnl4\" (UID: \"7fd28f12-f21e-4050-9102-45579a294fac\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fvnl4" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043320 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8aabc0b3-9299-4b7b-8d00-310cad0b4d63-cert\") pod \"ingress-canary-f8msc\" (UID: \"8aabc0b3-9299-4b7b-8d00-310cad0b4d63\") " pod="openshift-ingress-canary/ingress-canary-f8msc" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043338 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8ef682f0-d784-48ac-83f3-4c718f34edaf-stats-auth\") pod \"router-default-5444994796-wxc9p\" (UID: \"8ef682f0-d784-48ac-83f3-4c718f34edaf\") " pod="openshift-ingress/router-default-5444994796-wxc9p" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043354 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2837271a-7003-4e16-aa64-432493decb73-profile-collector-cert\") pod \"catalog-operator-68c6474976-2jj65\" (UID: \"2837271a-7003-4e16-aa64-432493decb73\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jj65" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043382 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e58dd08a-2f64-4b2f-8779-3ea2e4088142-certs\") pod \"machine-config-server-9prrw\" (UID: \"e58dd08a-2f64-4b2f-8779-3ea2e4088142\") " pod="openshift-machine-config-operator/machine-config-server-9prrw" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043401 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69a7724d-41d5-4946-81d6-d43497db7319-config\") pod \"kube-apiserver-operator-766d6c64bb-t6876\" (UID: \"69a7724d-41d5-4946-81d6-d43497db7319\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6876" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043420 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf56j\" (UniqueName: \"kubernetes.io/projected/f662a10c-20f8-49b5-9a41-6a17e156038b-kube-api-access-kf56j\") pod \"machine-config-operator-74547568cd-ln5s8\" (UID: \"f662a10c-20f8-49b5-9a41-6a17e156038b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln5s8" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043434 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2493e834-4bc7-43eb-a2c3-942598904f3a-webhook-cert\") pod \"packageserver-d55dfcdfc-5k5rr\" (UID: \"2493e834-4bc7-43eb-a2c3-942598904f3a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5k5rr" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043449 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" 
(UniqueName: \"kubernetes.io/host-path/04426b83-61f0-4c87-b0e7-f175836692df-mountpoint-dir\") pod \"csi-hostpathplugin-cztzr\" (UID: \"04426b83-61f0-4c87-b0e7-f175836692df\") " pod="hostpath-provisioner/csi-hostpathplugin-cztzr" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043463 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff258f9c-6ace-46bf-8228-05668edcbdd6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ztvf4\" (UID: \"ff258f9c-6ace-46bf-8228-05668edcbdd6\") " pod="openshift-marketplace/marketplace-operator-79b997595-ztvf4" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043482 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d68516ef-c18f-4d3f-bc80-71739e73cee1-registry-tls\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043498 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjh72\" (UniqueName: \"kubernetes.io/projected/04426b83-61f0-4c87-b0e7-f175836692df-kube-api-access-zjh72\") pod \"csi-hostpathplugin-cztzr\" (UID: \"04426b83-61f0-4c87-b0e7-f175836692df\") " pod="hostpath-provisioner/csi-hostpathplugin-cztzr" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043512 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e58dd08a-2f64-4b2f-8779-3ea2e4088142-node-bootstrap-token\") pod \"machine-config-server-9prrw\" (UID: \"e58dd08a-2f64-4b2f-8779-3ea2e4088142\") " pod="openshift-machine-config-operator/machine-config-server-9prrw" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043529 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9bdad3dd-22a5-46d4-be89-9f5f98da1738-etcd-service-ca\") pod \"etcd-operator-b45778765-qtf9d\" (UID: \"9bdad3dd-22a5-46d4-be89-9f5f98da1738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtf9d" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043543 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46mrm\" (UniqueName: \"kubernetes.io/projected/96be0671-6ddf-4af0-8989-da8c4a4dcfa7-kube-api-access-46mrm\") pod \"collect-profiles-29399700-hnjjf\" (UID: \"96be0671-6ddf-4af0-8989-da8c4a4dcfa7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399700-hnjjf" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.043559 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f662a10c-20f8-49b5-9a41-6a17e156038b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ln5s8\" (UID: \"f662a10c-20f8-49b5-9a41-6a17e156038b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln5s8" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.044076 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f662a10c-20f8-49b5-9a41-6a17e156038b-auth-proxy-config\") pod 
\"machine-config-operator-74547568cd-ln5s8\" (UID: \"f662a10c-20f8-49b5-9a41-6a17e156038b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln5s8" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.047884 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d68516ef-c18f-4d3f-bc80-71739e73cee1-trusted-ca\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.048566 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bdad3dd-22a5-46d4-be89-9f5f98da1738-config\") pod \"etcd-operator-b45778765-qtf9d\" (UID: \"9bdad3dd-22a5-46d4-be89-9f5f98da1738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtf9d" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.048729 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9bdad3dd-22a5-46d4-be89-9f5f98da1738-etcd-client\") pod \"etcd-operator-b45778765-qtf9d\" (UID: \"9bdad3dd-22a5-46d4-be89-9f5f98da1738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtf9d" Nov 24 11:11:25 crc kubenswrapper[5072]: E1124 11:11:25.049152 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:25.549138498 +0000 UTC m=+137.260662974 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.051224 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8ef682f0-d784-48ac-83f3-4c718f34edaf-default-certificate\") pod \"router-default-5444994796-wxc9p\" (UID: \"8ef682f0-d784-48ac-83f3-4c718f34edaf\") " pod="openshift-ingress/router-default-5444994796-wxc9p" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.051264 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d68516ef-c18f-4d3f-bc80-71739e73cee1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.051732 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9bdad3dd-22a5-46d4-be89-9f5f98da1738-etcd-ca\") pod \"etcd-operator-b45778765-qtf9d\" (UID: \"9bdad3dd-22a5-46d4-be89-9f5f98da1738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtf9d" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.052495 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/2493e834-4bc7-43eb-a2c3-942598904f3a-apiservice-cert\") pod \"packageserver-d55dfcdfc-5k5rr\" (UID: \"2493e834-4bc7-43eb-a2c3-942598904f3a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5k5rr" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.052979 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/119d4f92-5b02-4cc7-bb41-adcc78ccb157-config\") pod \"kube-controller-manager-operator-78b949d7b-7bjm7\" (UID: \"119d4f92-5b02-4cc7-bb41-adcc78ccb157\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7bjm7" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.053541 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69a7724d-41d5-4946-81d6-d43497db7319-config\") pod \"kube-apiserver-operator-766d6c64bb-t6876\" (UID: \"69a7724d-41d5-4946-81d6-d43497db7319\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6876" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.053810 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fb23ad0-2566-4f2c-8a33-97e253539289-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-h6q9x\" (UID: \"1fb23ad0-2566-4f2c-8a33-97e253539289\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h6q9x" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.053829 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2493e834-4bc7-43eb-a2c3-942598904f3a-tmpfs\") pod \"packageserver-d55dfcdfc-5k5rr\" (UID: \"2493e834-4bc7-43eb-a2c3-942598904f3a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5k5rr" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.054783 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/164c7d70-1b80-415a-8a7b-fbb1001b1286-trusted-ca\") pod \"ingress-operator-5b745b69d9-vftrc\" (UID: \"164c7d70-1b80-415a-8a7b-fbb1001b1286\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vftrc" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.055186 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/613216b8-2838-4eb4-8635-9aa0e797d101-config\") pod \"service-ca-operator-777779d784-x6g8r\" (UID: \"613216b8-2838-4eb4-8635-9aa0e797d101\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x6g8r" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.056695 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f662a10c-20f8-49b5-9a41-6a17e156038b-images\") pod \"machine-config-operator-74547568cd-ln5s8\" (UID: \"f662a10c-20f8-49b5-9a41-6a17e156038b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln5s8" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.058410 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d68516ef-c18f-4d3f-bc80-71739e73cee1-registry-certificates\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.060018 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ef682f0-d784-48ac-83f3-4c718f34edaf-service-ca-bundle\") pod \"router-default-5444994796-wxc9p\" (UID: \"8ef682f0-d784-48ac-83f3-4c718f34edaf\") " pod="openshift-ingress/router-default-5444994796-wxc9p" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.060669 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9bdad3dd-22a5-46d4-be89-9f5f98da1738-etcd-service-ca\") pod \"etcd-operator-b45778765-qtf9d\" (UID: \"9bdad3dd-22a5-46d4-be89-9f5f98da1738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtf9d" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.063175 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/164c7d70-1b80-415a-8a7b-fbb1001b1286-metrics-tls\") pod \"ingress-operator-5b745b69d9-vftrc\" (UID: \"164c7d70-1b80-415a-8a7b-fbb1001b1286\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vftrc" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.063951 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/613216b8-2838-4eb4-8635-9aa0e797d101-serving-cert\") pod \"service-ca-operator-777779d784-x6g8r\" (UID: \"613216b8-2838-4eb4-8635-9aa0e797d101\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x6g8r" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.064590 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bdad3dd-22a5-46d4-be89-9f5f98da1738-serving-cert\") pod \"etcd-operator-b45778765-qtf9d\" (UID: \"9bdad3dd-22a5-46d4-be89-9f5f98da1738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtf9d" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.065339 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ef682f0-d784-48ac-83f3-4c718f34edaf-metrics-certs\") pod \"router-default-5444994796-wxc9p\" (UID: \"8ef682f0-d784-48ac-83f3-4c718f34edaf\") " pod="openshift-ingress/router-default-5444994796-wxc9p" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.066291 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fb23ad0-2566-4f2c-8a33-97e253539289-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-h6q9x\" (UID: \"1fb23ad0-2566-4f2c-8a33-97e253539289\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h6q9x" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.067033 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2837271a-7003-4e16-aa64-432493decb73-srv-cert\") pod \"catalog-operator-68c6474976-2jj65\" (UID: \"2837271a-7003-4e16-aa64-432493decb73\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jj65" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.067197 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/7b8bcc47-53bd-45a5-937f-b515a314f662-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-nwsjb\" (UID: \"7b8bcc47-53bd-45a5-937f-b515a314f662\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nwsjb" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.068996 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f662a10c-20f8-49b5-9a41-6a17e156038b-proxy-tls\") pod \"machine-config-operator-74547568cd-ln5s8\" (UID: \"f662a10c-20f8-49b5-9a41-6a17e156038b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln5s8" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.069118 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/119d4f92-5b02-4cc7-bb41-adcc78ccb157-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7bjm7\" (UID: \"119d4f92-5b02-4cc7-bb41-adcc78ccb157\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7bjm7" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.070770 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d68516ef-c18f-4d3f-bc80-71739e73cee1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.073909 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2493e834-4bc7-43eb-a2c3-942598904f3a-webhook-cert\") pod \"packageserver-d55dfcdfc-5k5rr\" (UID: \"2493e834-4bc7-43eb-a2c3-942598904f3a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5k5rr" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.074018 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8ef682f0-d784-48ac-83f3-4c718f34edaf-stats-auth\") pod \"router-default-5444994796-wxc9p\" (UID: \"8ef682f0-d784-48ac-83f3-4c718f34edaf\") " pod="openshift-ingress/router-default-5444994796-wxc9p" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.080300 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69a7724d-41d5-4946-81d6-d43497db7319-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t6876\" (UID: \"69a7724d-41d5-4946-81d6-d43497db7319\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6876" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.095509 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d68516ef-c18f-4d3f-bc80-71739e73cee1-registry-tls\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.098687 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ml6b\" (UniqueName: \"kubernetes.io/projected/164c7d70-1b80-415a-8a7b-fbb1001b1286-kube-api-access-4ml6b\") pod \"ingress-operator-5b745b69d9-vftrc\" (UID: \"164c7d70-1b80-415a-8a7b-fbb1001b1286\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vftrc" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.098964 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2837271a-7003-4e16-aa64-432493decb73-profile-collector-cert\") pod \"catalog-operator-68c6474976-2jj65\" (UID: \"2837271a-7003-4e16-aa64-432493decb73\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jj65" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.101344 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5fa82d2-0cf9-46d0-b319-45a36d14a3af-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-m47n7\" (UID: \"d5fa82d2-0cf9-46d0-b319-45a36d14a3af\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-m47n7" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.139310 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/164c7d70-1b80-415a-8a7b-fbb1001b1286-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vftrc\" (UID: \"164c7d70-1b80-415a-8a7b-fbb1001b1286\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vftrc" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.140135 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxkh6\" (UniqueName: \"kubernetes.io/projected/9bdad3dd-22a5-46d4-be89-9f5f98da1738-kube-api-access-kxkh6\") pod \"etcd-operator-b45778765-qtf9d\" (UID: \"9bdad3dd-22a5-46d4-be89-9f5f98da1738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtf9d" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.144910 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145064 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/70a53cfd-05d8-426e-9b52-55af67b9c200-profile-collector-cert\") pod \"olm-operator-6b444d44fb-j5sfl\" (UID: \"70a53cfd-05d8-426e-9b52-55af67b9c200\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5sfl" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145088 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47cz7\" (UniqueName: \"kubernetes.io/projected/7fd28f12-f21e-4050-9102-45579a294fac-kube-api-access-47cz7\") pod \"machine-config-controller-84d6567774-fvnl4\" (UID: \"7fd28f12-f21e-4050-9102-45579a294fac\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fvnl4" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145107 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8aabc0b3-9299-4b7b-8d00-310cad0b4d63-cert\") pod \"ingress-canary-f8msc\" (UID: \"8aabc0b3-9299-4b7b-8d00-310cad0b4d63\") " pod="openshift-ingress-canary/ingress-canary-f8msc" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145122 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/e58dd08a-2f64-4b2f-8779-3ea2e4088142-certs\") pod \"machine-config-server-9prrw\" (UID: \"e58dd08a-2f64-4b2f-8779-3ea2e4088142\") " pod="openshift-machine-config-operator/machine-config-server-9prrw" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145148 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff258f9c-6ace-46bf-8228-05668edcbdd6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ztvf4\" (UID: \"ff258f9c-6ace-46bf-8228-05668edcbdd6\") " pod="openshift-marketplace/marketplace-operator-79b997595-ztvf4" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145165 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/04426b83-61f0-4c87-b0e7-f175836692df-mountpoint-dir\") pod \"csi-hostpathplugin-cztzr\" (UID: \"04426b83-61f0-4c87-b0e7-f175836692df\") " pod="hostpath-provisioner/csi-hostpathplugin-cztzr" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145181 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjh72\" (UniqueName: \"kubernetes.io/projected/04426b83-61f0-4c87-b0e7-f175836692df-kube-api-access-zjh72\") pod \"csi-hostpathplugin-cztzr\" (UID: \"04426b83-61f0-4c87-b0e7-f175836692df\") " pod="hostpath-provisioner/csi-hostpathplugin-cztzr" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145196 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e58dd08a-2f64-4b2f-8779-3ea2e4088142-node-bootstrap-token\") pod \"machine-config-server-9prrw\" (UID: \"e58dd08a-2f64-4b2f-8779-3ea2e4088142\") " pod="openshift-machine-config-operator/machine-config-server-9prrw" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145211 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46mrm\" (UniqueName: \"kubernetes.io/projected/96be0671-6ddf-4af0-8989-da8c4a4dcfa7-kube-api-access-46mrm\") pod \"collect-profiles-29399700-hnjjf\" (UID: \"96be0671-6ddf-4af0-8989-da8c4a4dcfa7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399700-hnjjf" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145228 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7fd28f12-f21e-4050-9102-45579a294fac-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-fvnl4\" (UID: \"7fd28f12-f21e-4050-9102-45579a294fac\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fvnl4" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145242 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7v48\" (UniqueName: \"kubernetes.io/projected/8aabc0b3-9299-4b7b-8d00-310cad0b4d63-kube-api-access-b7v48\") pod \"ingress-canary-f8msc\" (UID: \"8aabc0b3-9299-4b7b-8d00-310cad0b4d63\") " pod="openshift-ingress-canary/ingress-canary-f8msc" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145263 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0a68115-9754-4071-b421-d9627182ff91-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9x4dl\" (UID: 
\"c0a68115-9754-4071-b421-d9627182ff91\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x4dl" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145279 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/04426b83-61f0-4c87-b0e7-f175836692df-registration-dir\") pod \"csi-hostpathplugin-cztzr\" (UID: \"04426b83-61f0-4c87-b0e7-f175836692df\") " pod="hostpath-provisioner/csi-hostpathplugin-cztzr" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145305 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbh28\" (UniqueName: \"kubernetes.io/projected/ff258f9c-6ace-46bf-8228-05668edcbdd6-kube-api-access-hbh28\") pod \"marketplace-operator-79b997595-ztvf4\" (UID: \"ff258f9c-6ace-46bf-8228-05668edcbdd6\") " pod="openshift-marketplace/marketplace-operator-79b997595-ztvf4" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145327 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ff258f9c-6ace-46bf-8228-05668edcbdd6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ztvf4\" (UID: \"ff258f9c-6ace-46bf-8228-05668edcbdd6\") " pod="openshift-marketplace/marketplace-operator-79b997595-ztvf4" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145358 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/04426b83-61f0-4c87-b0e7-f175836692df-plugins-dir\") pod \"csi-hostpathplugin-cztzr\" (UID: \"04426b83-61f0-4c87-b0e7-f175836692df\") " pod="hostpath-provisioner/csi-hostpathplugin-cztzr" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145500 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcbdg\" (UniqueName: \"kubernetes.io/projected/c0a68115-9754-4071-b421-d9627182ff91-kube-api-access-zcbdg\") pod \"package-server-manager-789f6589d5-9x4dl\" (UID: \"c0a68115-9754-4071-b421-d9627182ff91\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x4dl" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145519 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pr4m\" (UniqueName: \"kubernetes.io/projected/70a53cfd-05d8-426e-9b52-55af67b9c200-kube-api-access-9pr4m\") pod \"olm-operator-6b444d44fb-j5sfl\" (UID: \"70a53cfd-05d8-426e-9b52-55af67b9c200\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5sfl" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145534 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blzpj\" (UniqueName: \"kubernetes.io/projected/311af931-95d6-429a-a86a-f54ab066747f-kube-api-access-blzpj\") pod \"kube-storage-version-migrator-operator-b67b599dd-8hq7n\" (UID: \"311af931-95d6-429a-a86a-f54ab066747f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8hq7n" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145550 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/311af931-95d6-429a-a86a-f54ab066747f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8hq7n\" (UID: \"311af931-95d6-429a-a86a-f54ab066747f\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8hq7n" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145565 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/04426b83-61f0-4c87-b0e7-f175836692df-socket-dir\") pod \"csi-hostpathplugin-cztzr\" (UID: \"04426b83-61f0-4c87-b0e7-f175836692df\") " pod="hostpath-provisioner/csi-hostpathplugin-cztzr" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145579 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96be0671-6ddf-4af0-8989-da8c4a4dcfa7-config-volume\") pod \"collect-profiles-29399700-hnjjf\" (UID: \"96be0671-6ddf-4af0-8989-da8c4a4dcfa7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399700-hnjjf" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145639 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/70a53cfd-05d8-426e-9b52-55af67b9c200-srv-cert\") pod \"olm-operator-6b444d44fb-j5sfl\" (UID: \"70a53cfd-05d8-426e-9b52-55af67b9c200\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5sfl" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145657 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d2ba5157-b42f-477c-9db4-84b325960b47-signing-key\") pod \"service-ca-9c57cc56f-hv7lg\" (UID: \"d2ba5157-b42f-477c-9db4-84b325960b47\") " pod="openshift-service-ca/service-ca-9c57cc56f-hv7lg" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145675 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cjpn\" (UniqueName: \"kubernetes.io/projected/d2ba5157-b42f-477c-9db4-84b325960b47-kube-api-access-8cjpn\") pod \"service-ca-9c57cc56f-hv7lg\" (UID: \"d2ba5157-b42f-477c-9db4-84b325960b47\") " pod="openshift-service-ca/service-ca-9c57cc56f-hv7lg" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145696 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8mw9\" (UniqueName: \"kubernetes.io/projected/91d52696-3096-4d21-b1b5-8e0abab2b1ba-kube-api-access-g8mw9\") pod \"dns-default-dxmxv\" (UID: \"91d52696-3096-4d21-b1b5-8e0abab2b1ba\") " pod="openshift-dns/dns-default-dxmxv" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145713 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/311af931-95d6-429a-a86a-f54ab066747f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8hq7n\" (UID: \"311af931-95d6-429a-a86a-f54ab066747f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8hq7n" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145733 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/04426b83-61f0-4c87-b0e7-f175836692df-csi-data-dir\") pod \"csi-hostpathplugin-cztzr\" (UID: \"04426b83-61f0-4c87-b0e7-f175836692df\") " pod="hostpath-provisioner/csi-hostpathplugin-cztzr" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145748 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/7fd28f12-f21e-4050-9102-45579a294fac-proxy-tls\") pod \"machine-config-controller-84d6567774-fvnl4\" (UID: \"7fd28f12-f21e-4050-9102-45579a294fac\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fvnl4" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145770 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91d52696-3096-4d21-b1b5-8e0abab2b1ba-config-volume\") pod \"dns-default-dxmxv\" (UID: \"91d52696-3096-4d21-b1b5-8e0abab2b1ba\") " pod="openshift-dns/dns-default-dxmxv" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145789 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d2ba5157-b42f-477c-9db4-84b325960b47-signing-cabundle\") pod \"service-ca-9c57cc56f-hv7lg\" (UID: \"d2ba5157-b42f-477c-9db4-84b325960b47\") " pod="openshift-service-ca/service-ca-9c57cc56f-hv7lg" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145805 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92wlj\" (UniqueName: \"kubernetes.io/projected/e58dd08a-2f64-4b2f-8779-3ea2e4088142-kube-api-access-92wlj\") pod \"machine-config-server-9prrw\" (UID: \"e58dd08a-2f64-4b2f-8779-3ea2e4088142\") " pod="openshift-machine-config-operator/machine-config-server-9prrw" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145818 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/91d52696-3096-4d21-b1b5-8e0abab2b1ba-metrics-tls\") pod \"dns-default-dxmxv\" (UID: \"91d52696-3096-4d21-b1b5-8e0abab2b1ba\") " pod="openshift-dns/dns-default-dxmxv" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.145837 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96be0671-6ddf-4af0-8989-da8c4a4dcfa7-secret-volume\") pod \"collect-profiles-29399700-hnjjf\" (UID: \"96be0671-6ddf-4af0-8989-da8c4a4dcfa7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399700-hnjjf" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.146950 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/04426b83-61f0-4c87-b0e7-f175836692df-plugins-dir\") pod \"csi-hostpathplugin-cztzr\" (UID: \"04426b83-61f0-4c87-b0e7-f175836692df\") " pod="hostpath-provisioner/csi-hostpathplugin-cztzr" Nov 24 11:11:25 crc kubenswrapper[5072]: E1124 11:11:25.147036 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:25.647019613 +0000 UTC m=+137.358544089 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.148800 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/04426b83-61f0-4c87-b0e7-f175836692df-registration-dir\") pod \"csi-hostpathplugin-cztzr\" (UID: \"04426b83-61f0-4c87-b0e7-f175836692df\") " pod="hostpath-provisioner/csi-hostpathplugin-cztzr" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.149098 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/04426b83-61f0-4c87-b0e7-f175836692df-socket-dir\") pod \"csi-hostpathplugin-cztzr\" (UID: \"04426b83-61f0-4c87-b0e7-f175836692df\") " pod="hostpath-provisioner/csi-hostpathplugin-cztzr" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.149876 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96be0671-6ddf-4af0-8989-da8c4a4dcfa7-config-volume\") pod \"collect-profiles-29399700-hnjjf\" (UID: \"96be0671-6ddf-4af0-8989-da8c4a4dcfa7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399700-hnjjf" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.151333 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/04426b83-61f0-4c87-b0e7-f175836692df-csi-data-dir\") pod \"csi-hostpathplugin-cztzr\" (UID: \"04426b83-61f0-4c87-b0e7-f175836692df\") " pod="hostpath-provisioner/csi-hostpathplugin-cztzr" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.151438 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91d52696-3096-4d21-b1b5-8e0abab2b1ba-config-volume\") pod \"dns-default-dxmxv\" (UID: \"91d52696-3096-4d21-b1b5-8e0abab2b1ba\") " pod="openshift-dns/dns-default-dxmxv" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.151456 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/04426b83-61f0-4c87-b0e7-f175836692df-mountpoint-dir\") pod \"csi-hostpathplugin-cztzr\" (UID: \"04426b83-61f0-4c87-b0e7-f175836692df\") " pod="hostpath-provisioner/csi-hostpathplugin-cztzr" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.151493 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d2ba5157-b42f-477c-9db4-84b325960b47-signing-cabundle\") pod \"service-ca-9c57cc56f-hv7lg\" (UID: \"d2ba5157-b42f-477c-9db4-84b325960b47\") " pod="openshift-service-ca/service-ca-9c57cc56f-hv7lg" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.151495 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7fd28f12-f21e-4050-9102-45579a294fac-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-fvnl4\" (UID: \"7fd28f12-f21e-4050-9102-45579a294fac\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fvnl4" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.151566 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/70a53cfd-05d8-426e-9b52-55af67b9c200-profile-collector-cert\") pod \"olm-operator-6b444d44fb-j5sfl\" (UID: \"70a53cfd-05d8-426e-9b52-55af67b9c200\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5sfl" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.152204 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/311af931-95d6-429a-a86a-f54ab066747f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8hq7n\" (UID: \"311af931-95d6-429a-a86a-f54ab066747f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8hq7n" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.152526 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff258f9c-6ace-46bf-8228-05668edcbdd6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ztvf4\" (UID: \"ff258f9c-6ace-46bf-8228-05668edcbdd6\") " pod="openshift-marketplace/marketplace-operator-79b997595-ztvf4" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.153255 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e58dd08a-2f64-4b2f-8779-3ea2e4088142-certs\") pod \"machine-config-server-9prrw\" (UID: \"e58dd08a-2f64-4b2f-8779-3ea2e4088142\") " pod="openshift-machine-config-operator/machine-config-server-9prrw" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.154580 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7fd28f12-f21e-4050-9102-45579a294fac-proxy-tls\") pod \"machine-config-controller-84d6567774-fvnl4\" (UID: \"7fd28f12-f21e-4050-9102-45579a294fac\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fvnl4" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.154705 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0a68115-9754-4071-b421-d9627182ff91-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9x4dl\" (UID: \"c0a68115-9754-4071-b421-d9627182ff91\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x4dl" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.154724 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96be0671-6ddf-4af0-8989-da8c4a4dcfa7-secret-volume\") pod \"collect-profiles-29399700-hnjjf\" (UID: \"96be0671-6ddf-4af0-8989-da8c4a4dcfa7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399700-hnjjf" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.155468 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8aabc0b3-9299-4b7b-8d00-310cad0b4d63-cert\") pod \"ingress-canary-f8msc\" (UID: \"8aabc0b3-9299-4b7b-8d00-310cad0b4d63\") " pod="openshift-ingress-canary/ingress-canary-f8msc" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.156355 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/311af931-95d6-429a-a86a-f54ab066747f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8hq7n\" (UID: \"311af931-95d6-429a-a86a-f54ab066747f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8hq7n" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.156630 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d2ba5157-b42f-477c-9db4-84b325960b47-signing-key\") pod \"service-ca-9c57cc56f-hv7lg\" (UID: \"d2ba5157-b42f-477c-9db4-84b325960b47\") " pod="openshift-service-ca/service-ca-9c57cc56f-hv7lg" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.156634 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwnnl\" (UniqueName: \"kubernetes.io/projected/d5fa82d2-0cf9-46d0-b319-45a36d14a3af-kube-api-access-gwnnl\") pod \"multus-admission-controller-857f4d67dd-m47n7\" (UID: \"d5fa82d2-0cf9-46d0-b319-45a36d14a3af\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-m47n7" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.157533 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/70a53cfd-05d8-426e-9b52-55af67b9c200-srv-cert\") pod \"olm-operator-6b444d44fb-j5sfl\" (UID: \"70a53cfd-05d8-426e-9b52-55af67b9c200\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5sfl" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.157545 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e58dd08a-2f64-4b2f-8779-3ea2e4088142-node-bootstrap-token\") pod \"machine-config-server-9prrw\" (UID: \"e58dd08a-2f64-4b2f-8779-3ea2e4088142\") " pod="openshift-machine-config-operator/machine-config-server-9prrw" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.158266 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/91d52696-3096-4d21-b1b5-8e0abab2b1ba-metrics-tls\") pod \"dns-default-dxmxv\" (UID: \"91d52696-3096-4d21-b1b5-8e0abab2b1ba\") " pod="openshift-dns/dns-default-dxmxv" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.158750 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ff258f9c-6ace-46bf-8228-05668edcbdd6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ztvf4\" (UID: \"ff258f9c-6ace-46bf-8228-05668edcbdd6\") " pod="openshift-marketplace/marketplace-operator-79b997595-ztvf4" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.175054 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gqpd\" (UniqueName: \"kubernetes.io/projected/2493e834-4bc7-43eb-a2c3-942598904f3a-kube-api-access-7gqpd\") pod \"packageserver-d55dfcdfc-5k5rr\" (UID: \"2493e834-4bc7-43eb-a2c3-942598904f3a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5k5rr" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.182526 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rxs28"] Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.184028 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-qtf9d" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.194026 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vftrc" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.196475 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48xbw\" (UniqueName: \"kubernetes.io/projected/d68516ef-c18f-4d3f-bc80-71739e73cee1-kube-api-access-48xbw\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.231786 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgp6s\" (UniqueName: \"kubernetes.io/projected/613216b8-2838-4eb4-8635-9aa0e797d101-kube-api-access-cgp6s\") pod \"service-ca-operator-777779d784-x6g8r\" (UID: \"613216b8-2838-4eb4-8635-9aa0e797d101\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x6g8r" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.238961 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hcl9\" (UniqueName: \"kubernetes.io/projected/2837271a-7003-4e16-aa64-432493decb73-kube-api-access-8hcl9\") pod \"catalog-operator-68c6474976-2jj65\" (UID: \"2837271a-7003-4e16-aa64-432493decb73\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jj65" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.243186 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5k5rr" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.247357 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:25 crc kubenswrapper[5072]: E1124 11:11:25.247788 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:25.747775811 +0000 UTC m=+137.459300287 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.252441 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-m47n7" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.254490 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/119d4f92-5b02-4cc7-bb41-adcc78ccb157-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7bjm7\" (UID: \"119d4f92-5b02-4cc7-bb41-adcc78ccb157\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7bjm7" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.257857 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x6g8r" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.271815 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rmzh4"] Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.284230 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ms2fp"] Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.286925 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d68516ef-c18f-4d3f-bc80-71739e73cee1-bound-sa-token\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.301432 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r26gl\" (UniqueName: \"kubernetes.io/projected/1fb23ad0-2566-4f2c-8a33-97e253539289-kube-api-access-r26gl\") pod \"openshift-controller-manager-operator-756b6f6bc6-h6q9x\" (UID: \"1fb23ad0-2566-4f2c-8a33-97e253539289\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h6q9x" Nov 24 11:11:25 crc kubenswrapper[5072]: W1124 11:11:25.312967 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b0c90f_a223_41e9_beb5_619fdeaf49c1.slice/crio-551b67c50ee837700c1e3ec42b52a508dd1b64054f824d65da68dca47e4e6edd WatchSource:0}: Error finding container 551b67c50ee837700c1e3ec42b52a508dd1b64054f824d65da68dca47e4e6edd: Status 404 returned error can't find the container with id 551b67c50ee837700c1e3ec42b52a508dd1b64054f824d65da68dca47e4e6edd Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.315026 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69a7724d-41d5-4946-81d6-d43497db7319-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t6876\" (UID: \"69a7724d-41d5-4946-81d6-d43497db7319\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6876" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.347032 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn5kv\" (UniqueName: \"kubernetes.io/projected/8ef682f0-d784-48ac-83f3-4c718f34edaf-kube-api-access-tn5kv\") pod \"router-default-5444994796-wxc9p\" (UID: \"8ef682f0-d784-48ac-83f3-4c718f34edaf\") " pod="openshift-ingress/router-default-5444994796-wxc9p" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.351548 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:11:25 crc kubenswrapper[5072]: E1124 11:11:25.351727 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:25.85171001 +0000 UTC m=+137.563234486 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.351919 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:25 crc kubenswrapper[5072]: E1124 11:11:25.352583 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:25.852573425 +0000 UTC m=+137.564097901 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.356452 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nldcl"] Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.362190 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bm2lw"] Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.369000 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf56j\" (UniqueName: \"kubernetes.io/projected/f662a10c-20f8-49b5-9a41-6a17e156038b-kube-api-access-kf56j\") pod \"machine-config-operator-74547568cd-ln5s8\" (UID: \"f662a10c-20f8-49b5-9a41-6a17e156038b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln5s8" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.383289 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt9rp\" (UniqueName: \"kubernetes.io/projected/815768ad-2984-4e34-afb0-4e98c3f0373f-kube-api-access-dt9rp\") pod \"migrator-59844c95c7-5d2ld\" (UID: \"815768ad-2984-4e34-afb0-4e98c3f0373f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5d2ld" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.427189 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqbkl\" (UniqueName: \"kubernetes.io/projected/7b8bcc47-53bd-45a5-937f-b515a314f662-kube-api-access-sqbkl\") pod \"control-plane-machine-set-operator-78cbb6b69f-nwsjb\" (UID: \"7b8bcc47-53bd-45a5-937f-b515a314f662\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nwsjb" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.445458 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qtf9d"] Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.446193 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cjpn\" (UniqueName: \"kubernetes.io/projected/d2ba5157-b42f-477c-9db4-84b325960b47-kube-api-access-8cjpn\") pod \"service-ca-9c57cc56f-hv7lg\" (UID: \"d2ba5157-b42f-477c-9db4-84b325960b47\") " pod="openshift-service-ca/service-ca-9c57cc56f-hv7lg" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.454449 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:11:25 crc kubenswrapper[5072]: E1124 11:11:25.454856 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 11:11:25.954841247 +0000 UTC m=+137.666365723 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.466333 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4fg22"] Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.466979 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcbdg\" (UniqueName: \"kubernetes.io/projected/c0a68115-9754-4071-b421-d9627182ff91-kube-api-access-zcbdg\") pod \"package-server-manager-789f6589d5-9x4dl\" (UID: \"c0a68115-9754-4071-b421-d9627182ff91\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x4dl" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.468204 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-l28pf"] Nov 24 11:11:25 crc kubenswrapper[5072]: W1124 11:11:25.468646 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bdad3dd_22a5_46d4_be89_9f5f98da1738.slice/crio-4564844e7a4447c7ff9dd7df2617b85cfa995aac626031c25772daf41eb1954f WatchSource:0}: Error finding container 4564844e7a4447c7ff9dd7df2617b85cfa995aac626031c25772daf41eb1954f: Status 404 returned error can't find the container with id 4564844e7a4447c7ff9dd7df2617b85cfa995aac626031c25772daf41eb1954f Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.468926 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78"] Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.487287 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h6q9x" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.487458 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4qrkp"] Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.500134 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pr4m\" (UniqueName: \"kubernetes.io/projected/70a53cfd-05d8-426e-9b52-55af67b9c200-kube-api-access-9pr4m\") pod \"olm-operator-6b444d44fb-j5sfl\" (UID: \"70a53cfd-05d8-426e-9b52-55af67b9c200\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5sfl" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.500351 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-wxc9p" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.507986 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln5s8" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.510659 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blzpj\" (UniqueName: \"kubernetes.io/projected/311af931-95d6-429a-a86a-f54ab066747f-kube-api-access-blzpj\") pod \"kube-storage-version-migrator-operator-b67b599dd-8hq7n\" (UID: \"311af931-95d6-429a-a86a-f54ab066747f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8hq7n" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.514642 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7bjm7" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.534463 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5d2ld" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.547406 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47cz7\" (UniqueName: \"kubernetes.io/projected/7fd28f12-f21e-4050-9102-45579a294fac-kube-api-access-47cz7\") pod \"machine-config-controller-84d6567774-fvnl4\" (UID: \"7fd28f12-f21e-4050-9102-45579a294fac\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fvnl4" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.547539 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6876" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.553717 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jj65" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.556538 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:25 crc kubenswrapper[5072]: E1124 11:11:25.556881 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:26.056868541 +0000 UTC m=+137.768393017 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.557149 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf"] Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.568425 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8mw9\" (UniqueName: \"kubernetes.io/projected/91d52696-3096-4d21-b1b5-8e0abab2b1ba-kube-api-access-g8mw9\") pod \"dns-default-dxmxv\" (UID: \"91d52696-3096-4d21-b1b5-8e0abab2b1ba\") " pod="openshift-dns/dns-default-dxmxv" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.571489 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dzh8r"] Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.574733 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nwsjb" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.576843 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbh28\" (UniqueName: \"kubernetes.io/projected/ff258f9c-6ace-46bf-8228-05668edcbdd6-kube-api-access-hbh28\") pod \"marketplace-operator-79b997595-ztvf4\" (UID: \"ff258f9c-6ace-46bf-8228-05668edcbdd6\") " pod="openshift-marketplace/marketplace-operator-79b997595-ztvf4" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.591796 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46mrm\" (UniqueName: \"kubernetes.io/projected/96be0671-6ddf-4af0-8989-da8c4a4dcfa7-kube-api-access-46mrm\") pod \"collect-profiles-29399700-hnjjf\" (UID: \"96be0671-6ddf-4af0-8989-da8c4a4dcfa7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399700-hnjjf" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.593567 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5k5rr"] Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.598625 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5sfl" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.599354 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7v48\" (UniqueName: \"kubernetes.io/projected/8aabc0b3-9299-4b7b-8d00-310cad0b4d63-kube-api-access-b7v48\") pod \"ingress-canary-f8msc\" (UID: \"8aabc0b3-9299-4b7b-8d00-310cad0b4d63\") " pod="openshift-ingress-canary/ingress-canary-f8msc" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.609147 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8hq7n" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.614580 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fvnl4" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.621868 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x4dl" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.624586 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ztvf4" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.625004 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92wlj\" (UniqueName: \"kubernetes.io/projected/e58dd08a-2f64-4b2f-8779-3ea2e4088142-kube-api-access-92wlj\") pod \"machine-config-server-9prrw\" (UID: \"e58dd08a-2f64-4b2f-8779-3ea2e4088142\") " pod="openshift-machine-config-operator/machine-config-server-9prrw" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.628725 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hv7lg" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.641052 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjh72\" (UniqueName: \"kubernetes.io/projected/04426b83-61f0-4c87-b0e7-f175836692df-kube-api-access-zjh72\") pod \"csi-hostpathplugin-cztzr\" (UID: \"04426b83-61f0-4c87-b0e7-f175836692df\") " pod="hostpath-provisioner/csi-hostpathplugin-cztzr" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.644971 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-cztzr" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.657303 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:11:25 crc kubenswrapper[5072]: E1124 11:11:25.657756 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:26.157740882 +0000 UTC m=+137.869265358 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.657836 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-f8msc" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.669201 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vftrc"] Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.669527 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9prrw" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.674143 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-dxmxv" Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.696815 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h6q9x"] Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.739971 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf" event={"ID":"ca699c4e-ccec-4ff8-895f-109777beca4c","Type":"ContainerStarted","Data":"3ffa303f86dad3facd8517c3c2829894323177b2d82268d2bff3ba2f41b202e7"} Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.743223 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78" event={"ID":"5354347e-2a7e-42d4-a13c-33daf97e79c0","Type":"ContainerStarted","Data":"a1d9a5180ebf718a6aa044afb09eb721384ed1ea99fa87977b34db74c53de798"} Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.745171 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-m47n7"] Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.751656 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-q8585" event={"ID":"d33a4711-23b8-41cb-bf35-708e252369ac","Type":"ContainerStarted","Data":"4616bb80e0727b310ae784bb053effea18c1854e94d5e213c4a93fcc0ce9eebc"} Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.751691 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-q8585" event={"ID":"d33a4711-23b8-41cb-bf35-708e252369ac","Type":"ContainerStarted","Data":"29402e1f8f3863277bf5cbdd5963949d9f2a63c9c95a717c29b2f51b0983cff0"} Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.759835 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:25 crc kubenswrapper[5072]: E1124 11:11:25.760121 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:26.260107997 +0000 UTC m=+137.971632473 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.768117 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-qtf9d" event={"ID":"9bdad3dd-22a5-46d4-be89-9f5f98da1738","Type":"ContainerStarted","Data":"4564844e7a4447c7ff9dd7df2617b85cfa995aac626031c25772daf41eb1954f"}
Nov 24 11:11:25 crc kubenswrapper[5072]: W1124 11:11:25.778759 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fb23ad0_2566_4f2c_8a33_97e253539289.slice/crio-bb2c9688ec34c5ddb927b4c5feb9378be8b6fae2b4db9dfc8a4d58af67b0b142 WatchSource:0}: Error finding container bb2c9688ec34c5ddb927b4c5feb9378be8b6fae2b4db9dfc8a4d58af67b0b142: Status 404 returned error can't find the container with id bb2c9688ec34c5ddb927b4c5feb9378be8b6fae2b4db9dfc8a4d58af67b0b142
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.791131 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4fg22" event={"ID":"f62763cf-97b0-41ff-bac4-e4acd8060859","Type":"ContainerStarted","Data":"611c8928c125dc32648a490f32d4631f4685350c5c964cdd33ba00fd0979394e"}
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.794512 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms2fp" event={"ID":"042c5da0-34af-4413-af57-feb5f484bfc3","Type":"ContainerStarted","Data":"b396230b04f20fdd83d3f4a48be1c154ab4eeb86f72fe4a20116f87446076f5a"}
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.794553 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms2fp" event={"ID":"042c5da0-34af-4413-af57-feb5f484bfc3","Type":"ContainerStarted","Data":"d097c75019c8cd8d5177e49aa2e351786e0c9a511210ff57a48217296a94868b"}
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.799474 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqmfz" event={"ID":"60ed0c7a-5210-4706-b7b6-d989561edf26","Type":"ContainerStarted","Data":"f5e72b8cf4e2896d56ba5f89577105e38f77ed348898bad3502d899f9915352c"}
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.799518 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqmfz" event={"ID":"60ed0c7a-5210-4706-b7b6-d989561edf26","Type":"ContainerStarted","Data":"4b565809173cddfbea2a8678b355bdd3682d8b29f06fbda1587663097dddc5a2"}
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.800959 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-798pd" event={"ID":"9d30ed7a-3577-40f4-8d32-eec9f851ab19","Type":"ContainerStarted","Data":"86db00fa613322d83f7edb0d0995dcdb70016cd829e8f458d7f9b1b086d78b94"}
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.800985 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-798pd" event={"ID":"9d30ed7a-3577-40f4-8d32-eec9f851ab19","Type":"ContainerStarted","Data":"16b8bb70a3c0c6a3aa3cde9816118e6c8174c822fe59fe7d3a2903f6c558076d"}
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.801747 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fpxll" event={"ID":"1cd359a9-17ba-43c9-8cb3-7c786777226b","Type":"ContainerStarted","Data":"c0194e7a785eeeeb5cd8b7bdda00538611a8cc9d8b61064cc2f8fad05cd05fce"}
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.801765 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fpxll" event={"ID":"1cd359a9-17ba-43c9-8cb3-7c786777226b","Type":"ContainerStarted","Data":"f1fdfd6115c9d7e442c4faf4f23bcbcad233c9442bf7541cc55eb8622f868a34"}
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.802275 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-fpxll"
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.803167 5072 patch_prober.go:28] interesting pod/downloads-7954f5f757-fpxll container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.803199 5072 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fpxll" podUID="1cd359a9-17ba-43c9-8cb3-7c786777226b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.805630 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4qrkp" event={"ID":"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea","Type":"ContainerStarted","Data":"5168baca96d2e0fc53eb21c8780727e25a345ce7ca4f02bd21feaf69cbe9219d"}
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.813663 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bm2lw" event={"ID":"a5c87ed3-ec26-42d1-99d0-37fd576f970d","Type":"ContainerStarted","Data":"eee0b6bdb969017b626f3995de861e59e3f32758c77e62642f486599570c88db"}
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.814453 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5k5rr" event={"ID":"2493e834-4bc7-43eb-a2c3-942598904f3a","Type":"ContainerStarted","Data":"078aa1d7223bb899e74eb30fc6b40824d7101c646d79e757fb2844275ac7a32b"}
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.824675 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rxs28" event={"ID":"c77a843c-6b36-4143-aff0-f5e7d227c11d","Type":"ContainerStarted","Data":"5bb89a188c4140e6a63a98fe9a82ba1ca60e79ee8abebf0e85d4bf6b09c99e19"}
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.824712 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rxs28" event={"ID":"c77a843c-6b36-4143-aff0-f5e7d227c11d","Type":"ContainerStarted","Data":"8f0f7944981212dadc57678af153a4aa7cc9f32b4194098a51b601f230ea9af5"}
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.824986 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.831354 5072 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-rxs28 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" start-of-body=
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.831406 5072 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-rxs28" podUID="c77a843c-6b36-4143-aff0-f5e7d227c11d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused"
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.834048 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dzh8r" event={"ID":"bcbc6938-ae1b-4306-a73d-7f2c5dc64047","Type":"ContainerStarted","Data":"6feb6744992ff39c43ab043e10ae8567f1de9405bd0c8766383a22715b7e3899"}
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.837632 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-km2xf" event={"ID":"421f29d9-28d7-4e85-852e-d25b0529497a","Type":"ContainerStarted","Data":"3094c361101979baf09885afdf03b95d3f681054275d5a2c5f220c9cdcbd3d20"}
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.837668 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-km2xf" event={"ID":"421f29d9-28d7-4e85-852e-d25b0529497a","Type":"ContainerStarted","Data":"fab1a48635d92f98293e5b0b0a4ff1824b6abef1558da5ca3563e04b8677bbc8"}
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.839323 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-km2xf"
Nov 24 11:11:25 crc kubenswrapper[5072]: W1124 11:11:25.843726 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5fa82d2_0cf9_46d0_b319_45a36d14a3af.slice/crio-eee473546c1fe28d6f4d4ac69f5e1def6d1b3c007500b897c4f9720525b6e65b WatchSource:0}: Error finding container eee473546c1fe28d6f4d4ac69f5e1def6d1b3c007500b897c4f9720525b6e65b: Status 404 returned error can't find the container with id eee473546c1fe28d6f4d4ac69f5e1def6d1b3c007500b897c4f9720525b6e65b
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.845696 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-x6g8r"]
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.846613 5072 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-km2xf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.846673 5072 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-km2xf" podUID="421f29d9-28d7-4e85-852e-d25b0529497a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.848081 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rg9n" event={"ID":"d9188831-917b-434c-b118-24c7971f6381","Type":"ContainerStarted","Data":"ba4e8ff11317f56c6e71c446419bd7e205169c9a84ccbd4019ac9f694431c2ba"}
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.848120 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rg9n" event={"ID":"d9188831-917b-434c-b118-24c7971f6381","Type":"ContainerStarted","Data":"38740b96b15f87a57162a959e95268a0f16babc4738131916a6f37f983875bc4"}
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.851771 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-l28pf" event={"ID":"c677e814-7e89-49be-a000-091b8e49d6b8","Type":"ContainerStarted","Data":"e1874fc2f9621b3070b9c3827d702d0ac9c9d854af0fe6602c18b2ee8800d1b5"}
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.854228 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nldcl" event={"ID":"b2182353-061f-40bf-8f81-1cb1aaaf1b97","Type":"ContainerStarted","Data":"d1dab94715b61eaeb075107e239723cb24ef6c08bdd0e1c8c2a8efc89cebacd6"}
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.858266 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7bjm7"]
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.861226 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:11:25 crc kubenswrapper[5072]: E1124 11:11:25.861686 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:26.361662167 +0000 UTC m=+138.073186653 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.862335 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rmzh4" event={"ID":"24b0c90f-a223-41e9-beb5-619fdeaf49c1","Type":"ContainerStarted","Data":"551b67c50ee837700c1e3ec42b52a508dd1b64054f824d65da68dca47e4e6edd"}
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.866451 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399700-hnjjf"
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.895685 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ln5s8"]
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.962722 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz"
Nov 24 11:11:25 crc kubenswrapper[5072]: E1124 11:11:25.966936 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:26.466894564 +0000 UTC m=+138.178419110 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:25 crc kubenswrapper[5072]: I1124 11:11:25.980397 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5d2ld"]
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.062143 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5sfl"]
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.065513 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:11:26 crc kubenswrapper[5072]: E1124 11:11:26.066079 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:26.565889351 +0000 UTC m=+138.277413827 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.167758 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz"
Nov 24 11:11:26 crc kubenswrapper[5072]: E1124 11:11:26.168458 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:26.668445531 +0000 UTC m=+138.379970007 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:26 crc kubenswrapper[5072]: W1124 11:11:26.194109 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf662a10c_20f8_49b5_9a41_6a17e156038b.slice/crio-6bbdabfe44ff551ea24f201474f8a5e955e8b71093b773f45bae4ae7fa1eba1c WatchSource:0}: Error finding container 6bbdabfe44ff551ea24f201474f8a5e955e8b71093b773f45bae4ae7fa1eba1c: Status 404 returned error can't find the container with id 6bbdabfe44ff551ea24f201474f8a5e955e8b71093b773f45bae4ae7fa1eba1c
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.269742 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:11:26 crc kubenswrapper[5072]: E1124 11:11:26.270464 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:26.770439974 +0000 UTC m=+138.481964450 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.326345 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6876"]
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.376070 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz"
Nov 24 11:11:26 crc kubenswrapper[5072]: E1124 11:11:26.376574 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:26.876562086 +0000 UTC m=+138.588086562 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:26 crc kubenswrapper[5072]: W1124 11:11:26.418134 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70a53cfd_05d8_426e_9b52_55af67b9c200.slice/crio-63a7be9b6f519c56a211e6557ec85fadbb8aed8f4aa2f1dab15cb3074756c790 WatchSource:0}: Error finding container 63a7be9b6f519c56a211e6557ec85fadbb8aed8f4aa2f1dab15cb3074756c790: Status 404 returned error can't find the container with id 63a7be9b6f519c56a211e6557ec85fadbb8aed8f4aa2f1dab15cb3074756c790
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.479811 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:11:26 crc kubenswrapper[5072]: E1124 11:11:26.479910 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:26.979889118 +0000 UTC m=+138.691413594 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.480331 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz"
Nov 24 11:11:26 crc kubenswrapper[5072]: E1124 11:11:26.480661 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:26.98064812 +0000 UTC m=+138.692172596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.491353 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jj65"]
Nov 24 11:11:26 crc kubenswrapper[5072]: W1124 11:11:26.539994 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode58dd08a_2f64_4b2f_8779_3ea2e4088142.slice/crio-0319f9ffdf957e780a7c29340708bcc1edacf9d6e67f3089602c4cc099578cee WatchSource:0}: Error finding container 0319f9ffdf957e780a7c29340708bcc1edacf9d6e67f3089602c4cc099578cee: Status 404 returned error can't find the container with id 0319f9ffdf957e780a7c29340708bcc1edacf9d6e67f3089602c4cc099578cee
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.584594 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8hq7n"]
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.585314 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:11:26 crc kubenswrapper[5072]: E1124 11:11:26.585695 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:27.08568104 +0000 UTC m=+138.797205516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.587732 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-fvnl4"]
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.612886 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nwsjb"]
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.686737 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz"
Nov 24 11:11:26 crc kubenswrapper[5072]: E1124 11:11:26.687048 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:27.187037335 +0000 UTC m=+138.898561801 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.792827 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:11:26 crc kubenswrapper[5072]: E1124 11:11:26.793005 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:27.292983272 +0000 UTC m=+139.004507748 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.793447 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz"
Nov 24 11:11:26 crc kubenswrapper[5072]: E1124 11:11:26.794034 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:27.293867808 +0000 UTC m=+139.005392284 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.882247 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9prrw" event={"ID":"e58dd08a-2f64-4b2f-8779-3ea2e4088142","Type":"ContainerStarted","Data":"0319f9ffdf957e780a7c29340708bcc1edacf9d6e67f3089602c4cc099578cee"}
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.884092 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x6g8r" event={"ID":"613216b8-2838-4eb4-8635-9aa0e797d101","Type":"ContainerStarted","Data":"ca9d616e751144033709147d672a6417d68030d6b13602bcd76543b2851a240e"}
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.897066 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:11:26 crc kubenswrapper[5072]: E1124 11:11:26.897393 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:27.397362504 +0000 UTC m=+139.108886980 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.899153 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqmfz" event={"ID":"60ed0c7a-5210-4706-b7b6-d989561edf26","Type":"ContainerStarted","Data":"c095a77f141d61283dd0cc7c001a5306589ffc7061b3e0d4226254d2e9e2a6e9"}
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.911517 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln5s8" event={"ID":"f662a10c-20f8-49b5-9a41-6a17e156038b","Type":"ContainerStarted","Data":"6bbdabfe44ff551ea24f201474f8a5e955e8b71093b773f45bae4ae7fa1eba1c"}
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.913875 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jj65" event={"ID":"2837271a-7003-4e16-aa64-432493decb73","Type":"ContainerStarted","Data":"47993ef95a8b09569bab58f6b8701f8e7e104236663fa7fa94cc04d6b65f71b6"}
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.918637 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-l28pf" event={"ID":"c677e814-7e89-49be-a000-091b8e49d6b8","Type":"ContainerStarted","Data":"687341eee58ccd884e015fc3e9f75faac5b05f148a1bb62a2bbef11274fb4751"}
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.920969 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-l28pf"
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.928230 5072 patch_prober.go:28] interesting pod/console-operator-58897d9998-l28pf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body=
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.928277 5072 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-l28pf" podUID="c677e814-7e89-49be-a000-091b8e49d6b8" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused"
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.936946 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf" event={"ID":"ca699c4e-ccec-4ff8-895f-109777beca4c","Type":"ContainerStarted","Data":"8fa7a95d108472a5a96017a67b76f1e4c64d97ae1be0d1e7b64586b60918620c"}
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.937022 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf"
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.945315 5072 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-mzvpf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body=
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.945389 5072 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf" podUID="ca699c4e-ccec-4ff8-895f-109777beca4c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused"
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.953347 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dzh8r" event={"ID":"bcbc6938-ae1b-4306-a73d-7f2c5dc64047","Type":"ContainerStarted","Data":"60196c0071feca1b1b6d7d71912912589b096b6f80d87f84c3b0c6863cd1bf9a"}
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.954766 5072 generic.go:334] "Generic (PLEG): container finished" podID="042c5da0-34af-4413-af57-feb5f484bfc3" containerID="b396230b04f20fdd83d3f4a48be1c154ab4eeb86f72fe4a20116f87446076f5a" exitCode=0
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.954827 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms2fp" event={"ID":"042c5da0-34af-4413-af57-feb5f484bfc3","Type":"ContainerDied","Data":"b396230b04f20fdd83d3f4a48be1c154ab4eeb86f72fe4a20116f87446076f5a"}
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.955669 5072 generic.go:334] "Generic (PLEG): container finished" podID="5354347e-2a7e-42d4-a13c-33daf97e79c0" containerID="0197a9a6a01ff577ad1d2738e414f4d1709310cd837cebae5ec1ef72b1273868" exitCode=0
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.955704 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78" event={"ID":"5354347e-2a7e-42d4-a13c-33daf97e79c0","Type":"ContainerDied","Data":"0197a9a6a01ff577ad1d2738e414f4d1709310cd837cebae5ec1ef72b1273868"}
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.957850 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7bjm7" event={"ID":"119d4f92-5b02-4cc7-bb41-adcc78ccb157","Type":"ContainerStarted","Data":"361278c05ba95c84ca1c6afb69a697a2b48b0b153619940171734430fe235a67"}
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.977776 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8hq7n" event={"ID":"311af931-95d6-429a-a86a-f54ab066747f","Type":"ContainerStarted","Data":"42f014dc531e6b6ab7d8ae5170c39b5746d1b84c88866e470689d60e31d75e0c"}
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.981687 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-wxc9p" event={"ID":"8ef682f0-d784-48ac-83f3-4c718f34edaf","Type":"ContainerStarted","Data":"69291193cbca677f417ece1d52b98bf79426937b7a87c66cf945d8d910e4ce2f"}
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.994434 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nldcl" event={"ID":"b2182353-061f-40bf-8f81-1cb1aaaf1b97","Type":"ContainerStarted","Data":"13ef6d0e29d8082ef64d60b39e326051a3ea0d7ac520a565d5dce3e579aca039"}
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.997275 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5d2ld" event={"ID":"815768ad-2984-4e34-afb0-4e98c3f0373f","Type":"ContainerStarted","Data":"63802d7009670f46e074c6e1ac0d0acd164c0f2b852916a5c751fcabf5c010dc"}
Nov 24 11:11:26 crc kubenswrapper[5072]: I1124 11:11:26.997952 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz"
Nov 24 11:11:26 crc kubenswrapper[5072]: E1124 11:11:26.998277 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:27.498265726 +0000 UTC m=+139.209790202 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.000560 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rmzh4" event={"ID":"24b0c90f-a223-41e9-beb5-619fdeaf49c1","Type":"ContainerStarted","Data":"79e37cb3c3f68bcb8d523a19b65719500e104146e7e97bf89010403645894a43"}
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.011058 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-fpxll" podStartSLOduration=116.011045403 podStartE2EDuration="1m56.011045403s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:26.973148056 +0000 UTC m=+138.684672532" watchObservedRunningTime="2025-11-24 11:11:27.011045403 +0000 UTC m=+138.722569879"
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.013089 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bm2lw" event={"ID":"a5c87ed3-ec26-42d1-99d0-37fd576f970d","Type":"ContainerStarted","Data":"33f11e2f98dc1dbcc20c9ceff184dfdd4e73b8bec0d6af68cf7e58d15cd1b090"}
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.054554 5072 patch_prober.go:28] interesting pod/downloads-7954f5f757-fpxll container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.055965 5072 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fpxll" podUID="1cd359a9-17ba-43c9-8cb3-7c786777226b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.098930 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:11:27 crc kubenswrapper[5072]: E1124 11:11:27.099016 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:27.598998544 +0000 UTC m=+139.310523020 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.100073 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz"
Nov 24 11:11:27 crc kubenswrapper[5072]: E1124 11:11:27.104973 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:27.604961105 +0000 UTC m=+139.316485581 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.141472 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h6q9x" event={"ID":"1fb23ad0-2566-4f2c-8a33-97e253539289","Type":"ContainerStarted","Data":"bb2c9688ec34c5ddb927b4c5feb9378be8b6fae2b4db9dfc8a4d58af67b0b142"}
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.141501 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vftrc" event={"ID":"164c7d70-1b80-415a-8a7b-fbb1001b1286","Type":"ContainerStarted","Data":"777f93e54507d0765c1cb0ed07e354f03782bc0bf60d1dec4012c1f4d84d9f36"}
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.141512 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vftrc" event={"ID":"164c7d70-1b80-415a-8a7b-fbb1001b1286","Type":"ContainerStarted","Data":"bda57c30fe89a15c1d71a607412baad6b4721a60d31e63040e793ae333eb2a0c"}
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.141542 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-km2xf"
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.141551 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-qtf9d" event={"ID":"9bdad3dd-22a5-46d4-be89-9f5f98da1738","Type":"ContainerStarted","Data":"c8a61a5ca88215f1a83c0cb4784cea4321343cf509b09213b31197ac143ec947"}
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.141562 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4fg22" event={"ID":"f62763cf-97b0-41ff-bac4-e4acd8060859","Type":"ContainerStarted","Data":"edd86c6c1f70b07e2a3abe05c6d09c6602ffd1c3dddc1c4429b11b48523b8e90"}
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.141571 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5sfl" event={"ID":"70a53cfd-05d8-426e-9b52-55af67b9c200","Type":"ContainerStarted","Data":"63a7be9b6f519c56a211e6557ec85fadbb8aed8f4aa2f1dab15cb3074756c790"}
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.141580 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6876" event={"ID":"69a7724d-41d5-4946-81d6-d43497db7319","Type":"ContainerStarted","Data":"024f552175e5bac6b7124c202bfa4620fcedc6f93c2a463cbfbab3cef2700e57"}
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.141593 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-m47n7" event={"ID":"d5fa82d2-0cf9-46d0-b319-45a36d14a3af","Type":"ContainerStarted","Data":"eee473546c1fe28d6f4d4ac69f5e1def6d1b3c007500b897c4f9720525b6e65b"}
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.156154 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-km2xf" podStartSLOduration=116.156138191 podStartE2EDuration="1m56.156138191s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:27.132975558 +0000 UTC m=+138.844500034" watchObservedRunningTime="2025-11-24 11:11:27.156138191 +0000 UTC m=+138.867662667"
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.156442 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-rxs28" podStartSLOduration=116.15643762 podStartE2EDuration="1m56.15643762s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:27.097281914 +0000 UTC m=+138.808806390" watchObservedRunningTime="2025-11-24 11:11:27.15643762 +0000 UTC m=+138.867962096"
Nov 24 11:11:27 crc kubenswrapper[5072]: W1124 11:11:27.168506 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b8bcc47_53bd_45a5_937f_b515a314f662.slice/crio-d8e6976f6727177993bfa093264b9a2760d61da68e8546b51bb6ccc2e5c84f68 WatchSource:0}: Error finding container d8e6976f6727177993bfa093264b9a2760d61da68e8546b51bb6ccc2e5c84f68: Status 404 returned error can't find the container with id d8e6976f6727177993bfa093264b9a2760d61da68e8546b51bb6ccc2e5c84f68
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.188644 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rg9n" podStartSLOduration=116.188628913 podStartE2EDuration="1m56.188628913s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:27.187242313 +0000 UTC m=+138.898766789" watchObservedRunningTime="2025-11-24 11:11:27.188628913 +0000 UTC m=+138.900153389"
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.200900 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:11:27 crc kubenswrapper[5072]: E1124 11:11:27.201185 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:27.701171672 +0000 UTC m=+139.412696148 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.218565 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-q8585" podStartSLOduration=116.21854706 podStartE2EDuration="1m56.21854706s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:27.21783001 +0000 UTC m=+138.929354476" watchObservedRunningTime="2025-11-24 11:11:27.21854706 +0000 UTC m=+138.930071536"
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.301969 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz"
Nov 24 11:11:27 crc kubenswrapper[5072]: E1124 11:11:27.302289 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:27.80227697 +0000 UTC m=+139.513801436 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.330026 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-798pd" podStartSLOduration=116.330009525 podStartE2EDuration="1m56.330009525s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:27.298761859 +0000 UTC m=+139.010286345" watchObservedRunningTime="2025-11-24 11:11:27.330009525 +0000 UTC m=+139.041534001"
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.345976 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf" podStartSLOduration=116.345958222 podStartE2EDuration="1m56.345958222s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:27.345552921 +0000 UTC m=+139.057077397" watchObservedRunningTime="2025-11-24 11:11:27.345958222 +0000 UTC m=+139.057482698"
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.403092 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:11:27 crc kubenswrapper[5072]: E1124 11:11:27.403519 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:27.903503132 +0000 UTC m=+139.615027608 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.442122 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqmfz" podStartSLOduration=118.442101188 podStartE2EDuration="1m58.442101188s" podCreationTimestamp="2025-11-24 11:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:27.383632602 +0000 UTC m=+139.095157078" watchObservedRunningTime="2025-11-24 11:11:27.442101188 +0000 UTC m=+139.153625664"
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.456179 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-rxs28"
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.466978 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4fg22" podStartSLOduration=116.46696101 podStartE2EDuration="1m56.46696101s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:27.465178289 +0000 UTC m=+139.176702765" watchObservedRunningTime="2025-11-24 11:11:27.46696101 +0000 UTC m=+139.178485486"
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.507251 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz"
Nov 24 11:11:27 crc kubenswrapper[5072]: E1124 11:11:27.507585 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:28.007575384 +0000 UTC m=+139.719099860 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.536054 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bm2lw" podStartSLOduration=116.53604136 podStartE2EDuration="1m56.53604136s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:27.533445585 +0000 UTC m=+139.244970061" watchObservedRunningTime="2025-11-24 11:11:27.53604136 +0000 UTC m=+139.247565836"
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.538866 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-cztzr"]
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.551562 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-f8msc"]
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.581758 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-qtf9d" podStartSLOduration=116.58174479 podStartE2EDuration="1m56.58174479s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:27.579811604 +0000 UTC m=+139.291336080" watchObservedRunningTime="2025-11-24 11:11:27.58174479 +0000 UTC m=+139.293269256"
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.613361 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:11:27 crc kubenswrapper[5072]: E1124 11:11:27.613588 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:28.113562912 +0000 UTC m=+139.825087388 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.613631 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz"
Nov 24 11:11:27 crc kubenswrapper[5072]: E1124 11:11:27.613928 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:28.113916632 +0000 UTC m=+139.825441108 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.666818 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-l28pf" podStartSLOduration=116.666800658 podStartE2EDuration="1m56.666800658s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:27.613016916 +0000 UTC m=+139.324541382" watchObservedRunningTime="2025-11-24 11:11:27.666800658 +0000 UTC m=+139.378325134"
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.713474 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399700-hnjjf"]
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.715055 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:11:27 crc kubenswrapper[5072]: E1124 11:11:27.715278 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:28.215263437 +0000 UTC m=+139.926787913 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.816017 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz"
Nov 24 11:11:27 crc kubenswrapper[5072]: E1124 11:11:27.816458 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:28.316440607 +0000 UTC m=+140.027965083 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.819201 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ztvf4"]
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.831439 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-dxmxv"]
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.833824 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x4dl"]
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.836362 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hv7lg"]
Nov 24 11:11:27 crc kubenswrapper[5072]: W1124 11:11:27.892135 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff258f9c_6ace_46bf_8228_05668edcbdd6.slice/crio-cc3419d4ddcdcdfd5fa243cafce6c84fbbc7c86089add5416c8d67f8e2fe6d37 WatchSource:0}: Error finding container cc3419d4ddcdcdfd5fa243cafce6c84fbbc7c86089add5416c8d67f8e2fe6d37: Status 404 returned error can't find the container with id cc3419d4ddcdcdfd5fa243cafce6c84fbbc7c86089add5416c8d67f8e2fe6d37
Nov 24 11:11:27 crc kubenswrapper[5072]: W1124 11:11:27.903479 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0a68115_9754_4071_b421_d9627182ff91.slice/crio-04d0adfda8f0811df29e8a7815298af4add05ee9536b9645e99675be334bbeb0 WatchSource:0}: Error finding container 04d0adfda8f0811df29e8a7815298af4add05ee9536b9645e99675be334bbeb0: Status 404 returned error can't find the container with id 04d0adfda8f0811df29e8a7815298af4add05ee9536b9645e99675be334bbeb0
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.917009 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:11:27 crc kubenswrapper[5072]: E1124 11:11:27.919391 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:28.419327756 +0000 UTC m=+140.130852232 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:27 crc kubenswrapper[5072]: I1124 11:11:27.923916 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz"
Nov 24 11:11:27 crc kubenswrapper[5072]: E1124 11:11:27.924490 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:28.424472013 +0000 UTC m=+140.135996489 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.025583 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:11:28 crc kubenswrapper[5072]: E1124 11:11:28.026529 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:28.526514548 +0000 UTC m=+140.238039024 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.127732 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:28 crc kubenswrapper[5072]: E1124 11:11:28.128240 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:28.628228284 +0000 UTC m=+140.339752760 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.143453 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5d2ld" event={"ID":"815768ad-2984-4e34-afb0-4e98c3f0373f","Type":"ContainerStarted","Data":"e36756453c56eab86618b7439cb073426f57909a4a9cc6d0cface87525b05582"} Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.175643 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nwsjb" event={"ID":"7b8bcc47-53bd-45a5-937f-b515a314f662","Type":"ContainerStarted","Data":"d8e6976f6727177993bfa093264b9a2760d61da68e8546b51bb6ccc2e5c84f68"} Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.229727 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:11:28 crc kubenswrapper[5072]: E1124 11:11:28.230045 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:28.730028952 +0000 UTC m=+140.441553428 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.230451 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:28 crc kubenswrapper[5072]: E1124 11:11:28.230723 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:28.730715661 +0000 UTC m=+140.442240137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.246357 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-m47n7" event={"ID":"d5fa82d2-0cf9-46d0-b319-45a36d14a3af","Type":"ContainerStarted","Data":"2febc391c09979b6a2d102de276ca0012dfc022e13c83574606fae3a9dec39ee"} Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.276151 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x4dl" event={"ID":"c0a68115-9754-4071-b421-d9627182ff91","Type":"ContainerStarted","Data":"04d0adfda8f0811df29e8a7815298af4add05ee9536b9645e99675be334bbeb0"} Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.299699 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ztvf4" event={"ID":"ff258f9c-6ace-46bf-8228-05668edcbdd6","Type":"ContainerStarted","Data":"cc3419d4ddcdcdfd5fa243cafce6c84fbbc7c86089add5416c8d67f8e2fe6d37"} Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.307195 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cztzr" event={"ID":"04426b83-61f0-4c87-b0e7-f175836692df","Type":"ContainerStarted","Data":"470435b025df8a4c8aed2bfee3981fd5064436eae0cc2b6789a022555220a8d1"} Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.331328 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:11:28 crc kubenswrapper[5072]: E1124 11:11:28.331469 5072 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:28.831445819 +0000 UTC m=+140.542970295 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.331925 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:28 crc kubenswrapper[5072]: E1124 11:11:28.332193 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:28.83218176 +0000 UTC m=+140.543706236 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.350585 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-f8msc" event={"ID":"8aabc0b3-9299-4b7b-8d00-310cad0b4d63","Type":"ContainerStarted","Data":"192b99dd125781c926c3496443e47b1967e993d10049a9a83b5b019a87f12294"} Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.358142 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dxmxv" event={"ID":"91d52696-3096-4d21-b1b5-8e0abab2b1ba","Type":"ContainerStarted","Data":"b8ddce13392a4f1a20228a8a95de1d343bdf11b08ea85b37b949a5994ac33e68"} Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.366296 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x6g8r" podStartSLOduration=117.366280277 podStartE2EDuration="1m57.366280277s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:28.319465385 +0000 UTC m=+140.030989861" watchObservedRunningTime="2025-11-24 11:11:28.366280277 +0000 UTC m=+140.077804753" Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.367908 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vftrc" podStartSLOduration=117.367903274 podStartE2EDuration="1m57.367903274s" 
podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:28.365821614 +0000 UTC m=+140.077346090" watchObservedRunningTime="2025-11-24 11:11:28.367903274 +0000 UTC m=+140.079427750" Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.420000 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7bjm7" podStartSLOduration=117.419983576 podStartE2EDuration="1m57.419983576s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:28.419505483 +0000 UTC m=+140.131029959" watchObservedRunningTime="2025-11-24 11:11:28.419983576 +0000 UTC m=+140.131508052" Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.421759 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nldcl" podStartSLOduration=117.421753667 podStartE2EDuration="1m57.421753667s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:28.392055816 +0000 UTC m=+140.103580292" watchObservedRunningTime="2025-11-24 11:11:28.421753667 +0000 UTC m=+140.133278143" Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.435714 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.437798 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6876" event={"ID":"69a7724d-41d5-4946-81d6-d43497db7319","Type":"ContainerStarted","Data":"4301a47f35835f2fd352f30c402bafda7f36530188edde98e82d3c1d2b9b1f5f"} Nov 24 11:11:28 crc kubenswrapper[5072]: E1124 11:11:28.438984 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:28.938968711 +0000 UTC m=+140.650493187 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.461341 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-dzh8r" podStartSLOduration=117.461325291 podStartE2EDuration="1m57.461325291s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:28.460147238 +0000 UTC m=+140.171671714" watchObservedRunningTime="2025-11-24 11:11:28.461325291 +0000 UTC m=+140.172849767" Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.462666 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399700-hnjjf" event={"ID":"96be0671-6ddf-4af0-8989-da8c4a4dcfa7","Type":"ContainerStarted","Data":"1bbd92c18eed9b8aa9b2cbef824a3e735cb2c807fe195c897f054d67e71f219d"} Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.507872 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-wxc9p" event={"ID":"8ef682f0-d784-48ac-83f3-4c718f34edaf","Type":"ContainerStarted","Data":"3c2aa7c1626e72cc794537513bb7c081ce3a09607674ee8dcf391f2c6b2d16ed"} Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.522444 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399700-hnjjf" podStartSLOduration=117.522428943 podStartE2EDuration="1m57.522428943s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:28.522017291 +0000 UTC m=+140.233541767" watchObservedRunningTime="2025-11-24 11:11:28.522428943 +0000 UTC m=+140.233953419" Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.523694 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6876" podStartSLOduration=117.523688499 podStartE2EDuration="1m57.523688499s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:28.498521238 +0000 UTC m=+140.210045714" watchObservedRunningTime="2025-11-24 11:11:28.523688499 +0000 UTC m=+140.235212975" Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.531894 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hv7lg" event={"ID":"d2ba5157-b42f-477c-9db4-84b325960b47","Type":"ContainerStarted","Data":"3ca7cf88a99183cb2ecbc7e6f8fa1f6852037ceeffa743ccb10e205ca070176b"} Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.537839 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:28 crc kubenswrapper[5072]: E1124 11:11:28.539760 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:29.039747129 +0000 UTC m=+140.751271605 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.553302 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-wxc9p" podStartSLOduration=117.553284047 podStartE2EDuration="1m57.553284047s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:28.551956729 +0000 UTC m=+140.263481205" watchObservedRunningTime="2025-11-24 11:11:28.553284047 +0000 UTC m=+140.264808523" Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.564762 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5sfl" Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.589282 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78" event={"ID":"5354347e-2a7e-42d4-a13c-33daf97e79c0","Type":"ContainerStarted","Data":"5e87c40fab66c03063bddc01753f15db498ab335703bd5d9a615514cc239f28b"} Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.591633 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5k5rr" event={"ID":"2493e834-4bc7-43eb-a2c3-942598904f3a","Type":"ContainerStarted","Data":"d87e55e455e3d4e90f9ab8072a7a51b79bd38dd2b5c8221af33c152c6c39fdd2"} Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.592460 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5k5rr" Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.601539 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8hq7n" podStartSLOduration=117.60152352 podStartE2EDuration="1m57.60152352s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:28.600584163 +0000 UTC m=+140.312108639" watchObservedRunningTime="2025-11-24 11:11:28.60152352 +0000 UTC m=+140.313047996" Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.603364 5072 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-j5sfl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": 
dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.603421 5072 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5sfl" podUID="70a53cfd-05d8-426e-9b52-55af67b9c200" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.604524 5072 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5k5rr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:5443/healthz\": dial tcp 10.217.0.20:5443: connect: connection refused" start-of-body= Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.604553 5072 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5k5rr" podUID="2493e834-4bc7-43eb-a2c3-942598904f3a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.20:5443/healthz\": dial tcp 10.217.0.20:5443: connect: connection refused" Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.620812 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h6q9x" event={"ID":"1fb23ad0-2566-4f2c-8a33-97e253539289","Type":"ContainerStarted","Data":"335abceb3dff208e6f570c471bcb50c7e35a36eb704a19dfc7f0ff49f0e0ea2d"} Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.639537 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:11:28 crc kubenswrapper[5072]: E1124 11:11:28.640423 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:29.140403274 +0000 UTC m=+140.851927740 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.664524 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jj65" Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.673983 5072 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-2jj65 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.674021 5072 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jj65" podUID="2837271a-7003-4e16-aa64-432493decb73" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.750327 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:28 crc kubenswrapper[5072]: E1124 11:11:28.751585 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:29.251573941 +0000 UTC m=+140.963098417 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.771918 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5sfl" podStartSLOduration=117.771903334 podStartE2EDuration="1m57.771903334s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:28.663305921 +0000 UTC m=+140.374830397" watchObservedRunningTime="2025-11-24 11:11:28.771903334 +0000 UTC m=+140.483427810" Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.780916 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5k5rr" podStartSLOduration=117.78089941100001 podStartE2EDuration="1m57.780899411s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:28.769766802 +0000 UTC m=+140.481291278" watchObservedRunningTime="2025-11-24 11:11:28.780899411 +0000 UTC m=+140.492423887" Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.792266 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fvnl4" event={"ID":"7fd28f12-f21e-4050-9102-45579a294fac","Type":"ContainerStarted","Data":"26f709443c28d8b8c079ae24443cdc1eed5e299c363c910c0f8035a8acd4594c"} Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.794429 5072 patch_prober.go:28] interesting pod/downloads-7954f5f757-fpxll container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.794472 5072 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fpxll" podUID="1cd359a9-17ba-43c9-8cb3-7c786777226b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.805643 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-l28pf" Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.816646 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf" Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.830281 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78" podStartSLOduration=117.830265776 podStartE2EDuration="1m57.830265776s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-11-24 11:11:28.829616268 +0000 UTC m=+140.541140744" watchObservedRunningTime="2025-11-24 11:11:28.830265776 +0000 UTC m=+140.541790252" Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.855992 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:11:28 crc kubenswrapper[5072]: E1124 11:11:28.856299 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:29.356285052 +0000 UTC m=+141.067809528 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.962342 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:28 crc kubenswrapper[5072]: E1124 11:11:28.969496 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:29.469481527 +0000 UTC m=+141.181006003 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:28 crc kubenswrapper[5072]: I1124 11:11:28.988592 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jj65" podStartSLOduration=117.988568544 podStartE2EDuration="1m57.988568544s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:28.986498175 +0000 UTC m=+140.698022651" watchObservedRunningTime="2025-11-24 11:11:28.988568544 +0000 UTC m=+140.700093020" Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.041691 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fvnl4" podStartSLOduration=118.041674316 podStartE2EDuration="1m58.041674316s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:29.010724219 +0000 UTC m=+140.722248695" watchObservedRunningTime="2025-11-24 11:11:29.041674316 +0000 UTC m=+140.753198782" Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.064754 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:11:29 crc kubenswrapper[5072]: E1124 11:11:29.065076 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:29.565062257 +0000 UTC m=+141.276586733 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.071251 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-rmzh4" podStartSLOduration=118.071233173 podStartE2EDuration="1m58.071233173s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:29.043445947 +0000 UTC m=+140.754970423" watchObservedRunningTime="2025-11-24 11:11:29.071233173 +0000 UTC m=+140.782757639" Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.107182 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-9prrw" podStartSLOduration=7.107164493 podStartE2EDuration="7.107164493s" podCreationTimestamp="2025-11-24 11:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:29.072934192 +0000 UTC m=+140.784458658" watchObservedRunningTime="2025-11-24 11:11:29.107164493 +0000 UTC m=+140.818688969" Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.145834 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h6q9x" podStartSLOduration=118.145813921 podStartE2EDuration="1m58.145813921s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:29.108748709 +0000 UTC m=+140.820273185" watchObservedRunningTime="2025-11-24 11:11:29.145813921 +0000 UTC m=+140.857338397" Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.169157 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:29 crc kubenswrapper[5072]: E1124 11:11:29.169594 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:29.669578232 +0000 UTC m=+141.381102708 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.270667 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:11:29 crc kubenswrapper[5072]: E1124 11:11:29.271055 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:29.771040661 +0000 UTC m=+141.482565137 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.371997 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:29 crc kubenswrapper[5072]: E1124 11:11:29.373065 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:29.873052385 +0000 UTC m=+141.584576861 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.473728 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:11:29 crc kubenswrapper[5072]: E1124 11:11:29.474009 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:29.973995328 +0000 UTC m=+141.685519804 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.501495 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-wxc9p" Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.506170 5072 patch_prober.go:28] interesting pod/router-default-5444994796-wxc9p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:11:29 crc kubenswrapper[5072]: [-]has-synced failed: reason withheld Nov 24 11:11:29 crc kubenswrapper[5072]: [+]process-running ok Nov 24 11:11:29 crc kubenswrapper[5072]: healthz check failed Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.506203 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wxc9p" podUID="8ef682f0-d784-48ac-83f3-4c718f34edaf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.575224 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:29 crc kubenswrapper[5072]: E1124 11:11:29.575573 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:30.075562269 +0000 UTC m=+141.787086745 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.676520 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:11:29 crc kubenswrapper[5072]: E1124 11:11:29.676822 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:30.176789271 +0000 UTC m=+141.888313747 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.676919 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:29 crc kubenswrapper[5072]: E1124 11:11:29.677207 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:30.177195802 +0000 UTC m=+141.888720278 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.777460 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:11:29 crc kubenswrapper[5072]: E1124 11:11:29.777886 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:30.277866198 +0000 UTC m=+141.989390674 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.797351 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dxmxv" event={"ID":"91d52696-3096-4d21-b1b5-8e0abab2b1ba","Type":"ContainerStarted","Data":"5cbfc8165945a151c84f04ea439a007c97aa86210c6975f534aa61e6dd326cec"} Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.797405 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dxmxv" event={"ID":"91d52696-3096-4d21-b1b5-8e0abab2b1ba","Type":"ContainerStarted","Data":"dccbdfc247f3d9189ccb1f120d8ac74a846f300ef5291e1d89276be35d9477c1"} Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.797570 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-dxmxv" Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.798735 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-f8msc" event={"ID":"8aabc0b3-9299-4b7b-8d00-310cad0b4d63","Type":"ContainerStarted","Data":"3f43c5ca9f9d85409d69b30ea3629e31be62f705d7a300a03f17454bca09bd43"} Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.800482 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8hq7n" event={"ID":"311af931-95d6-429a-a86a-f54ab066747f","Type":"ContainerStarted","Data":"4fe783545eadaaf9bba945fbe8cbb39a9c72ac88a43f3dfac61faf260697d8c9"} Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.802092 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cztzr" event={"ID":"04426b83-61f0-4c87-b0e7-f175836692df","Type":"ContainerStarted","Data":"75464afe0d29e33a96b038db6f0801ee0a6bbe98568a148b370889249ce423b9"} Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 
11:11:29.803000 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7bjm7" event={"ID":"119d4f92-5b02-4cc7-bb41-adcc78ccb157","Type":"ContainerStarted","Data":"3d57e4adab70691e8322f169599c31f40140ac06a29f3682db46e53a1c2d11fa"} Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.804960 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms2fp" event={"ID":"042c5da0-34af-4413-af57-feb5f484bfc3","Type":"ContainerStarted","Data":"fa6c9da5b19ac56e6a93ae313c4630ad07d501e04cb3db5f38354d9de8a3b78a"} Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.805279 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms2fp" Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.806209 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jj65" event={"ID":"2837271a-7003-4e16-aa64-432493decb73","Type":"ContainerStarted","Data":"5425c0478eed12be9f7c135b9f881a74c7b065aee42d517b0087e78e30c8c04c"} Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.808785 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln5s8" event={"ID":"f662a10c-20f8-49b5-9a41-6a17e156038b","Type":"ContainerStarted","Data":"d874346b02b4801488ebcafddd6975b8057ebf8447a08f7d74d790fd2fdc80cc"} Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.808825 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln5s8" event={"ID":"f662a10c-20f8-49b5-9a41-6a17e156038b","Type":"ContainerStarted","Data":"a3e35721408d632ba1f0d957ca71ac70d291219e2b7eeeacf358b62985958dd2"} Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.809843 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nwsjb" event={"ID":"7b8bcc47-53bd-45a5-937f-b515a314f662","Type":"ContainerStarted","Data":"a43f9a866a60390092572bb8c1e2db6642a3bcae4a979d72af8d16bab47f1016"} Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.811201 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399700-hnjjf" event={"ID":"96be0671-6ddf-4af0-8989-da8c4a4dcfa7","Type":"ContainerStarted","Data":"c48dcbaf38f2a63fd2677bbd5dc38e2f921e4b8b27185ac7837b2e5a55a30906"} Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.814301 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-dxmxv" podStartSLOduration=7.814291612 podStartE2EDuration="7.814291612s" podCreationTimestamp="2025-11-24 11:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:29.813588432 +0000 UTC m=+141.525112908" watchObservedRunningTime="2025-11-24 11:11:29.814291612 +0000 UTC m=+141.525816088" Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.817483 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x6g8r" event={"ID":"613216b8-2838-4eb4-8635-9aa0e797d101","Type":"ContainerStarted","Data":"7d94c1d7c8f9dae1ad5050df2abfce5899d6aa4a4d3d771de6471ab5634f7ea9"} Nov 24 11:11:29 crc 
kubenswrapper[5072]: I1124 11:11:29.819755 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fvnl4" event={"ID":"7fd28f12-f21e-4050-9102-45579a294fac","Type":"ContainerStarted","Data":"3f4786a1afaec9694511bddbfe4de55d91c7c960e8740dad850f927108c8e6b4"} Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.819833 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fvnl4" event={"ID":"7fd28f12-f21e-4050-9102-45579a294fac","Type":"ContainerStarted","Data":"b590925c6176be49f76e0413c5ef34b0daed304a4c8de9c84a93c07ab342b324"} Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.820843 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jj65" Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.821272 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5sfl" event={"ID":"70a53cfd-05d8-426e-9b52-55af67b9c200","Type":"ContainerStarted","Data":"2850eb41957ed348f6e65d2d0033109687faf564d4c5ad36c9afd3e495fe8c8e"} Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.823161 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rmzh4" event={"ID":"24b0c90f-a223-41e9-beb5-619fdeaf49c1","Type":"ContainerStarted","Data":"62a8e62eb260a135e32af8f660d17c5f9f11d42b25579d112d2a9b26f14a7a11"} Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.824856 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5d2ld" event={"ID":"815768ad-2984-4e34-afb0-4e98c3f0373f","Type":"ContainerStarted","Data":"824f25486580e1ac0234f477b4f4a51f12fe588570475d8c66886d65892c426b"} Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.826557 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-m47n7" event={"ID":"d5fa82d2-0cf9-46d0-b319-45a36d14a3af","Type":"ContainerStarted","Data":"169a5b446810dfb1273b0fd9969725f02d1c905d7cf10f5668d5532bc23f1768"} Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.828907 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x4dl" event={"ID":"c0a68115-9754-4071-b421-d9627182ff91","Type":"ContainerStarted","Data":"ae229c71ece866e6ea4c53a73c5874d6243cf46d2f3be5581a449cd6a1a4290c"} Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.828957 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x4dl" event={"ID":"c0a68115-9754-4071-b421-d9627182ff91","Type":"ContainerStarted","Data":"8766370f3cde27abad6e7294d08825c4b370bb74e9a94f460701c3f3395ceae3"} Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.829454 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x4dl" Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.831359 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nldcl" event={"ID":"b2182353-061f-40bf-8f81-1cb1aaaf1b97","Type":"ContainerStarted","Data":"228277bc76f5be3f34f026555b2a3d7aeca21e0a7cfdc6f3dbce7bd54475726d"} Nov 24 11:11:29 crc 
kubenswrapper[5072]: I1124 11:11:29.833861 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dzh8r" event={"ID":"bcbc6938-ae1b-4306-a73d-7f2c5dc64047","Type":"ContainerStarted","Data":"715373328c9ac2ae7a817673531e9d860f0e345a669f30a33e1e5cdcc0362d3d"} Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.836749 5072 generic.go:334] "Generic (PLEG): container finished" podID="2b4f223b-f1f8-4e6b-ae06-519bc73d38ea" containerID="883e5482dddbe9cc93ab5ecb385da6b73eeb15dc80d3b16abdcb19ab10f7eb68" exitCode=0 Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.836819 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4qrkp" event={"ID":"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea","Type":"ContainerDied","Data":"883e5482dddbe9cc93ab5ecb385da6b73eeb15dc80d3b16abdcb19ab10f7eb68"} Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.836854 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4qrkp" event={"ID":"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea","Type":"ContainerStarted","Data":"e072d3c0a819c7080049a9638877dfdf843954826d0170feb8265dad7bb1347e"} Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.836867 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4qrkp" event={"ID":"2b4f223b-f1f8-4e6b-ae06-519bc73d38ea","Type":"ContainerStarted","Data":"6487a9442a43a7b5d1db1f6d795c6e248ec05e095ad2caf5e72692e2d036d10b"} Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.839201 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9prrw" event={"ID":"e58dd08a-2f64-4b2f-8779-3ea2e4088142","Type":"ContainerStarted","Data":"b340f051d433011e290c479f472a713809a5ca51f2695134a263a2d9b50108fe"} Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.841964 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hv7lg" event={"ID":"d2ba5157-b42f-477c-9db4-84b325960b47","Type":"ContainerStarted","Data":"f06b194735157b18c7b36f6259c9ba43d879aa35b9dd963cc2419144c79f3af5"} Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.844459 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vftrc" event={"ID":"164c7d70-1b80-415a-8a7b-fbb1001b1286","Type":"ContainerStarted","Data":"fb90de6ce6daf2540fc568b6ca7a25da9b83eaaf83914342032935d542ecc629"} Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.846540 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ztvf4" event={"ID":"ff258f9c-6ace-46bf-8228-05668edcbdd6","Type":"ContainerStarted","Data":"ccd408d15620e17218e4114f89aed9a7d363d8d800cebf9fed86e85667326a17"} Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.862397 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln5s8" podStartSLOduration=118.86236116 podStartE2EDuration="1m58.86236116s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:29.861594928 +0000 UTC m=+141.573119414" watchObservedRunningTime="2025-11-24 11:11:29.86236116 +0000 UTC m=+141.573885636" Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.866159 5072 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-4qrkp" Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.866210 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-4qrkp" Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.867460 5072 patch_prober.go:28] interesting pod/apiserver-76f77b778f-4qrkp container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.44:8443/livez\": dial tcp 10.217.0.44:8443: connect: connection refused" start-of-body= Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.867528 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-4qrkp" podUID="2b4f223b-f1f8-4e6b-ae06-519bc73d38ea" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.44:8443/livez\": dial tcp 10.217.0.44:8443: connect: connection refused" Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.881555 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:29 crc kubenswrapper[5072]: E1124 11:11:29.890771 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:30.390748613 +0000 UTC m=+142.102273179 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.908421 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78" Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.908687 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78" Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.980743 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5sfl" Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.981974 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-f8msc" podStartSLOduration=7.981955718 podStartE2EDuration="7.981955718s" podCreationTimestamp="2025-11-24 11:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:29.980954979 +0000 UTC m=+141.692479455" watchObservedRunningTime="2025-11-24 11:11:29.981955718 +0000 UTC m=+141.693480184" Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.982651 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:11:29 crc kubenswrapper[5072]: I1124 11:11:29.982718 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nwsjb" podStartSLOduration=118.982712239 podStartE2EDuration="1m58.982712239s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:29.942012053 +0000 UTC m=+141.653536529" watchObservedRunningTime="2025-11-24 11:11:29.982712239 +0000 UTC m=+141.694236715" Nov 24 11:11:29 crc kubenswrapper[5072]: E1124 11:11:29.982980 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:30.482961957 +0000 UTC m=+142.194486433 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.054851 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms2fp" podStartSLOduration=119.054830256 podStartE2EDuration="1m59.054830256s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:30.042708329 +0000 UTC m=+141.754232805" watchObservedRunningTime="2025-11-24 11:11:30.054830256 +0000 UTC m=+141.766354733" Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.070796 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.072078 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-m47n7" podStartSLOduration=119.072063161 podStartE2EDuration="1m59.072063161s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:30.071491034 +0000 UTC m=+141.783015510" watchObservedRunningTime="2025-11-24 11:11:30.072063161 +0000 UTC m=+141.783587637" Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.087955 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:30 crc kubenswrapper[5072]: E1124 11:11:30.088359 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:30.588344317 +0000 UTC m=+142.299868793 (durationBeforeRetry 500ms). 
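The entries above show one full cycle of the failure loop that dominates this section: the volume reconciler starts an UnmountVolume (for pod 8f668bae-612b-4b75-9490-919e737c6a3b) or MountVolume (for image-registry-697d97f7c8-9w2qz) operation on pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8, the CSI layer cannot find kubevirt.io.hostpath-provisioner among the registered drivers, and nestedpendingoperations blocks the next attempt for another 500ms (durationBeforeRetry). A minimal Go sketch for tallying these failures per volume from a saved journal excerpt follows; the program and its regexp are illustrative, not part of the log:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        // Matches the two error forms seen in this log:
        //   Error: MountVolume.MountDevice failed for volume "pvc-..."
        //   Error: UnmountVolume.TearDown failed for volume "pvc-..."
        re := regexp.MustCompile(`(MountVolume\.MountDevice|UnmountVolume\.TearDown) failed for volume "([^"]+)"`)
        counts := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
        for sc.Scan() {
            if m := re.FindStringSubmatch(sc.Text()); m != nil {
                counts[m[1]+" "+m[2]]++
            }
        }
        for key, n := range counts {
            fmt.Printf("%6d  %s\n", n, key)
        }
    }

Fed with output along the lines of journalctl -u kubelet --no-pager, this would report how many MountDevice and TearDown attempts each PVC burned before the driver registered.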
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.154140 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-4qrkp" podStartSLOduration=119.154125683 podStartE2EDuration="1m59.154125683s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:30.142843259 +0000 UTC m=+141.854367735" watchObservedRunningTime="2025-11-24 11:11:30.154125683 +0000 UTC m=+141.865650159" Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.198122 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:11:30 crc kubenswrapper[5072]: E1124 11:11:30.198758 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:30.698731711 +0000 UTC m=+142.410256227 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.299729 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:30 crc kubenswrapper[5072]: E1124 11:11:30.300105 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:30.800089246 +0000 UTC m=+142.511613722 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.333247 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x4dl" podStartSLOduration=119.333227326 podStartE2EDuration="1m59.333227326s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:30.31068736 +0000 UTC m=+142.022211836" watchObservedRunningTime="2025-11-24 11:11:30.333227326 +0000 UTC m=+142.044751802" Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.335176 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5d2ld" podStartSLOduration=119.335170892 podStartE2EDuration="1m59.335170892s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:30.333123873 +0000 UTC m=+142.044648359" watchObservedRunningTime="2025-11-24 11:11:30.335170892 +0000 UTC m=+142.046695368" Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.351808 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-hv7lg" podStartSLOduration=119.351794148 podStartE2EDuration="1m59.351794148s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:30.350481741 +0000 UTC m=+142.062006217" watchObservedRunningTime="2025-11-24 11:11:30.351794148 +0000 UTC m=+142.063318614" Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.373302 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ztvf4" podStartSLOduration=119.373285194 podStartE2EDuration="1m59.373285194s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:30.370595707 +0000 UTC m=+142.082120193" watchObservedRunningTime="2025-11-24 11:11:30.373285194 +0000 UTC m=+142.084809670" Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.401402 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:11:30 crc kubenswrapper[5072]: E1124 11:11:30.401612 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
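The pod_startup_latency_tracker entries above carry the same value twice: podStartSLOduration is a float in seconds, while podStartE2EDuration is the equivalent Go time.Duration string. A one-line check, using the package-server-manager figures from the entry above, confirms they agree:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // podStartE2EDuration from the entry above, parsed as a Go duration string.
        d, err := time.ParseDuration("1m59.333227326s")
        if err != nil {
            panic(err)
        }
        fmt.Println(d.Seconds()) // 119.333227326 — identical to podStartSLOduration
    }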
No retries permitted until 2025-11-24 11:11:30.901587066 +0000 UTC m=+142.613111542 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.401709 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:30 crc kubenswrapper[5072]: E1124 11:11:30.402038 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:30.902028368 +0000 UTC m=+142.613552844 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.492870 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78" Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.502543 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:11:30 crc kubenswrapper[5072]: E1124 11:11:30.503125 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:31.003093585 +0000 UTC m=+142.714618051 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.509246 5072 patch_prober.go:28] interesting pod/router-default-5444994796-wxc9p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:11:30 crc kubenswrapper[5072]: [-]has-synced failed: reason withheld Nov 24 11:11:30 crc kubenswrapper[5072]: [+]process-running ok Nov 24 11:11:30 crc kubenswrapper[5072]: healthz check failed Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.509299 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wxc9p" podUID="8ef682f0-d784-48ac-83f3-4c718f34edaf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.604318 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:30 crc kubenswrapper[5072]: E1124 11:11:30.604658 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:31.104644456 +0000 UTC m=+142.816168932 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.706331 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:11:30 crc kubenswrapper[5072]: E1124 11:11:30.706491 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:31.206466395 +0000 UTC m=+142.917990871 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.706638 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:30 crc kubenswrapper[5072]: E1124 11:11:30.706948 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:31.206939558 +0000 UTC m=+142.918464034 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.715009 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5k5rr" Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.807142 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:11:30 crc kubenswrapper[5072]: E1124 11:11:30.807309 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:31.307283834 +0000 UTC m=+143.018808310 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.807666 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:30 crc kubenswrapper[5072]: E1124 11:11:30.808134 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:31.308117698 +0000 UTC m=+143.019642174 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.858932 5072 generic.go:334] "Generic (PLEG): container finished" podID="96be0671-6ddf-4af0-8989-da8c4a4dcfa7" containerID="c48dcbaf38f2a63fd2677bbd5dc38e2f921e4b8b27185ac7837b2e5a55a30906" exitCode=0 Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.859141 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399700-hnjjf" event={"ID":"96be0671-6ddf-4af0-8989-da8c4a4dcfa7","Type":"ContainerDied","Data":"c48dcbaf38f2a63fd2677bbd5dc38e2f921e4b8b27185ac7837b2e5a55a30906"} Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.862906 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ztvf4" Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.863073 5072 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ztvf4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.863115 5072 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ztvf4" podUID="ff258f9c-6ace-46bf-8228-05668edcbdd6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.869849 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kcz78" Nov 24 11:11:30 crc kubenswrapper[5072]: I1124 11:11:30.908753 5072 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:11:30 crc kubenswrapper[5072]: E1124 11:11:30.911682 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:31.411662886 +0000 UTC m=+143.123187362 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.010562 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:31 crc kubenswrapper[5072]: E1124 11:11:31.010973 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:31.510960852 +0000 UTC m=+143.222485328 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.112115 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:11:31 crc kubenswrapper[5072]: E1124 11:11:31.112290 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:31.612266205 +0000 UTC m=+143.323790681 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.112694 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:31 crc kubenswrapper[5072]: E1124 11:11:31.113036 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:31.613004376 +0000 UTC m=+143.324528852 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.213479 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:11:31 crc kubenswrapper[5072]: E1124 11:11:31.213666 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:31.713637361 +0000 UTC m=+143.425161837 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.314805 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:31 crc kubenswrapper[5072]: E1124 11:11:31.315150 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:31.81513881 +0000 UTC m=+143.526663286 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.415434 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:11:31 crc kubenswrapper[5072]: E1124 11:11:31.415589 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:31.915572529 +0000 UTC m=+143.627097005 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.415648 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:31 crc kubenswrapper[5072]: E1124 11:11:31.415932 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:31.915925219 +0000 UTC m=+143.627449695 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.425812 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-slkhf"] Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.427076 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-slkhf" Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.437148 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.462511 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-slkhf"] Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.511213 5072 patch_prober.go:28] interesting pod/router-default-5444994796-wxc9p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:11:31 crc kubenswrapper[5072]: [-]has-synced failed: reason withheld Nov 24 11:11:31 crc kubenswrapper[5072]: [+]process-running ok Nov 24 11:11:31 crc kubenswrapper[5072]: healthz check failed Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.511277 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wxc9p" podUID="8ef682f0-d784-48ac-83f3-4c718f34edaf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.516349 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:11:31 crc kubenswrapper[5072]: E1124 11:11:31.516512 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:32.016488721 +0000 UTC m=+143.728013197 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.516566 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbeb508a-245e-4c6c-9d4f-6f6f330cea5d-utilities\") pod \"certified-operators-slkhf\" (UID: \"cbeb508a-245e-4c6c-9d4f-6f6f330cea5d\") " pod="openshift-marketplace/certified-operators-slkhf" Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.516689 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqcm9\" (UniqueName: \"kubernetes.io/projected/cbeb508a-245e-4c6c-9d4f-6f6f330cea5d-kube-api-access-wqcm9\") pod \"certified-operators-slkhf\" (UID: \"cbeb508a-245e-4c6c-9d4f-6f6f330cea5d\") " pod="openshift-marketplace/certified-operators-slkhf" Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.516715 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbeb508a-245e-4c6c-9d4f-6f6f330cea5d-catalog-content\") pod \"certified-operators-slkhf\" (UID: \"cbeb508a-245e-4c6c-9d4f-6f6f330cea5d\") " pod="openshift-marketplace/certified-operators-slkhf" Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.516813 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:31 crc kubenswrapper[5072]: E1124 11:11:31.517099 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:32.017087399 +0000 UTC m=+143.728611875 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.581854 5072 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.601874 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pvs9g"] Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.602843 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pvs9g" Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.604343 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.615753 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pvs9g"] Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.617579 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.617655 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbeb508a-245e-4c6c-9d4f-6f6f330cea5d-utilities\") pod \"certified-operators-slkhf\" (UID: \"cbeb508a-245e-4c6c-9d4f-6f6f330cea5d\") " pod="openshift-marketplace/certified-operators-slkhf" Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.617689 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f57ff17-1692-4fef-ba23-2b510f5a748b-catalog-content\") pod \"community-operators-pvs9g\" (UID: \"2f57ff17-1692-4fef-ba23-2b510f5a748b\") " pod="openshift-marketplace/community-operators-pvs9g" Nov 24 11:11:31 crc kubenswrapper[5072]: E1124 11:11:31.617729 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:32.117709573 +0000 UTC m=+143.829234049 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.617755 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqcm9\" (UniqueName: \"kubernetes.io/projected/cbeb508a-245e-4c6c-9d4f-6f6f330cea5d-kube-api-access-wqcm9\") pod \"certified-operators-slkhf\" (UID: \"cbeb508a-245e-4c6c-9d4f-6f6f330cea5d\") " pod="openshift-marketplace/certified-operators-slkhf" Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.617801 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbeb508a-245e-4c6c-9d4f-6f6f330cea5d-catalog-content\") pod \"certified-operators-slkhf\" (UID: \"cbeb508a-245e-4c6c-9d4f-6f6f330cea5d\") " pod="openshift-marketplace/certified-operators-slkhf" Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.617869 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.617921 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nm6n\" (UniqueName: \"kubernetes.io/projected/2f57ff17-1692-4fef-ba23-2b510f5a748b-kube-api-access-2nm6n\") pod \"community-operators-pvs9g\" (UID: \"2f57ff17-1692-4fef-ba23-2b510f5a748b\") " pod="openshift-marketplace/community-operators-pvs9g" Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.617966 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f57ff17-1692-4fef-ba23-2b510f5a748b-utilities\") pod \"community-operators-pvs9g\" (UID: \"2f57ff17-1692-4fef-ba23-2b510f5a748b\") " pod="openshift-marketplace/community-operators-pvs9g" Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.618173 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbeb508a-245e-4c6c-9d4f-6f6f330cea5d-utilities\") pod \"certified-operators-slkhf\" (UID: \"cbeb508a-245e-4c6c-9d4f-6f6f330cea5d\") " pod="openshift-marketplace/certified-operators-slkhf" Nov 24 11:11:31 crc kubenswrapper[5072]: E1124 11:11:31.618300 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:32.118292019 +0000 UTC m=+143.829816495 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.618315 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbeb508a-245e-4c6c-9d4f-6f6f330cea5d-catalog-content\") pod \"certified-operators-slkhf\" (UID: \"cbeb508a-245e-4c6c-9d4f-6f6f330cea5d\") " pod="openshift-marketplace/certified-operators-slkhf" Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.641723 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqcm9\" (UniqueName: \"kubernetes.io/projected/cbeb508a-245e-4c6c-9d4f-6f6f330cea5d-kube-api-access-wqcm9\") pod \"certified-operators-slkhf\" (UID: \"cbeb508a-245e-4c6c-9d4f-6f6f330cea5d\") " pod="openshift-marketplace/certified-operators-slkhf" Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.718739 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:11:31 crc kubenswrapper[5072]: E1124 11:11:31.719451 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:32.219437879 +0000 UTC m=+143.930962355 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.719468 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nm6n\" (UniqueName: \"kubernetes.io/projected/2f57ff17-1692-4fef-ba23-2b510f5a748b-kube-api-access-2nm6n\") pod \"community-operators-pvs9g\" (UID: \"2f57ff17-1692-4fef-ba23-2b510f5a748b\") " pod="openshift-marketplace/community-operators-pvs9g" Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.719507 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f57ff17-1692-4fef-ba23-2b510f5a748b-utilities\") pod \"community-operators-pvs9g\" (UID: \"2f57ff17-1692-4fef-ba23-2b510f5a748b\") " pod="openshift-marketplace/community-operators-pvs9g" Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.719551 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f57ff17-1692-4fef-ba23-2b510f5a748b-catalog-content\") pod \"community-operators-pvs9g\" (UID: \"2f57ff17-1692-4fef-ba23-2b510f5a748b\") " pod="openshift-marketplace/community-operators-pvs9g" Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.719609 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:31 crc kubenswrapper[5072]: E1124 11:11:31.719805 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:32.219798659 +0000 UTC m=+143.931323135 (durationBeforeRetry 500ms). 
Nov 24 11:11:31 crc kubenswrapper[5072]: E1124 11:11:31.719805 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:32.219798659 +0000 UTC m=+143.931323135 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.720273 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f57ff17-1692-4fef-ba23-2b510f5a748b-utilities\") pod \"community-operators-pvs9g\" (UID: \"2f57ff17-1692-4fef-ba23-2b510f5a748b\") " pod="openshift-marketplace/community-operators-pvs9g"
Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.720501 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f57ff17-1692-4fef-ba23-2b510f5a748b-catalog-content\") pod \"community-operators-pvs9g\" (UID: \"2f57ff17-1692-4fef-ba23-2b510f5a748b\") " pod="openshift-marketplace/community-operators-pvs9g"
Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.736799 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nm6n\" (UniqueName: \"kubernetes.io/projected/2f57ff17-1692-4fef-ba23-2b510f5a748b-kube-api-access-2nm6n\") pod \"community-operators-pvs9g\" (UID: \"2f57ff17-1692-4fef-ba23-2b510f5a748b\") " pod="openshift-marketplace/community-operators-pvs9g"
Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.749621 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-slkhf"
Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.808081 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lsrl7"]
Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.808988 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lsrl7"
Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.820543 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:11:31 crc kubenswrapper[5072]: E1124 11:11:31.820914 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:32.320899997 +0000 UTC m=+144.032424473 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.822468 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lsrl7"]
Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.871349 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cztzr" event={"ID":"04426b83-61f0-4c87-b0e7-f175836692df","Type":"ContainerStarted","Data":"0df44a95357db3bd22d4b6b87cae55bd4ccbf01f135c0f18016e2430753cac11"}
Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.871643 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cztzr" event={"ID":"04426b83-61f0-4c87-b0e7-f175836692df","Type":"ContainerStarted","Data":"5efd7fd06846390614660ba195b83f767bab3009bd28db9fc5180d9f6f234f7b"}
Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.871654 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cztzr" event={"ID":"04426b83-61f0-4c87-b0e7-f175836692df","Type":"ContainerStarted","Data":"3c7827d6461dfc1ff00f355397b03735236f9bb449c10f4c3db3dc1d1028e899"}
Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.879151 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms2fp"
Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.882692 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ztvf4"
Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.894626 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-cztzr" podStartSLOduration=9.89461175 podStartE2EDuration="9.89461175s" podCreationTimestamp="2025-11-24 11:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:31.89321981 +0000 UTC m=+143.604744286" watchObservedRunningTime="2025-11-24 11:11:31.89461175 +0000 UTC m=+143.606136226"
Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.914745 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pvs9g"
Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.921718 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b22a28d-845b-4cc5-a4d6-bd747cf5c958-utilities\") pod \"certified-operators-lsrl7\" (UID: \"7b22a28d-845b-4cc5-a4d6-bd747cf5c958\") " pod="openshift-marketplace/certified-operators-lsrl7"
Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.921811 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz"
Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.921899 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmjlc\" (UniqueName: \"kubernetes.io/projected/7b22a28d-845b-4cc5-a4d6-bd747cf5c958-kube-api-access-vmjlc\") pod \"certified-operators-lsrl7\" (UID: \"7b22a28d-845b-4cc5-a4d6-bd747cf5c958\") " pod="openshift-marketplace/certified-operators-lsrl7"
Nov 24 11:11:31 crc kubenswrapper[5072]: I1124 11:11:31.922004 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b22a28d-845b-4cc5-a4d6-bd747cf5c958-catalog-content\") pod \"certified-operators-lsrl7\" (UID: \"7b22a28d-845b-4cc5-a4d6-bd747cf5c958\") " pod="openshift-marketplace/certified-operators-lsrl7"
Nov 24 11:11:31 crc kubenswrapper[5072]: E1124 11:11:31.922297 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:32.422285863 +0000 UTC m=+144.133810339 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.002463 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s9t8g"]
Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.003309 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s9t8g"
Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.014092 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s9t8g"]
Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.025923 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.026116 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b22a28d-845b-4cc5-a4d6-bd747cf5c958-catalog-content\") pod \"certified-operators-lsrl7\" (UID: \"7b22a28d-845b-4cc5-a4d6-bd747cf5c958\") " pod="openshift-marketplace/certified-operators-lsrl7"
Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.026161 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b22a28d-845b-4cc5-a4d6-bd747cf5c958-utilities\") pod \"certified-operators-lsrl7\" (UID: \"7b22a28d-845b-4cc5-a4d6-bd747cf5c958\") " pod="openshift-marketplace/certified-operators-lsrl7"
Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.026449 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmjlc\" (UniqueName: \"kubernetes.io/projected/7b22a28d-845b-4cc5-a4d6-bd747cf5c958-kube-api-access-vmjlc\") pod \"certified-operators-lsrl7\" (UID: \"7b22a28d-845b-4cc5-a4d6-bd747cf5c958\") " pod="openshift-marketplace/certified-operators-lsrl7"
Nov 24 11:11:32 crc kubenswrapper[5072]: E1124 11:11:32.028144 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:11:32.528111506 +0000 UTC m=+144.239635982 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.030486 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b22a28d-845b-4cc5-a4d6-bd747cf5c958-catalog-content\") pod \"certified-operators-lsrl7\" (UID: \"7b22a28d-845b-4cc5-a4d6-bd747cf5c958\") " pod="openshift-marketplace/certified-operators-lsrl7"
Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.031645 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b22a28d-845b-4cc5-a4d6-bd747cf5c958-utilities\") pod \"certified-operators-lsrl7\" (UID: \"7b22a28d-845b-4cc5-a4d6-bd747cf5c958\") " pod="openshift-marketplace/certified-operators-lsrl7"
Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.069356 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmjlc\" (UniqueName: \"kubernetes.io/projected/7b22a28d-845b-4cc5-a4d6-bd747cf5c958-kube-api-access-vmjlc\") pod \"certified-operators-lsrl7\" (UID: \"7b22a28d-845b-4cc5-a4d6-bd747cf5c958\") " pod="openshift-marketplace/certified-operators-lsrl7"
Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.122162 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lsrl7"
Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.128269 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f53d96c-25ab-4cc4-ac1a-84ae05681d4b-catalog-content\") pod \"community-operators-s9t8g\" (UID: \"2f53d96c-25ab-4cc4-ac1a-84ae05681d4b\") " pod="openshift-marketplace/community-operators-s9t8g"
Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.128327 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xrfd\" (UniqueName: \"kubernetes.io/projected/2f53d96c-25ab-4cc4-ac1a-84ae05681d4b-kube-api-access-9xrfd\") pod \"community-operators-s9t8g\" (UID: \"2f53d96c-25ab-4cc4-ac1a-84ae05681d4b\") " pod="openshift-marketplace/community-operators-s9t8g"
Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.128353 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz"
Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.128406 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f53d96c-25ab-4cc4-ac1a-84ae05681d4b-utilities\") pod \"community-operators-s9t8g\" (UID: \"2f53d96c-25ab-4cc4-ac1a-84ae05681d4b\") " pod="openshift-marketplace/community-operators-s9t8g"
Nov 24 11:11:32 crc kubenswrapper[5072]: E1124 11:11:32.128706 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:11:32.628690719 +0000 UTC m=+144.340215195 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9w2qz" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.181502 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399700-hnjjf"
Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.205096 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-slkhf"]
Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.214871 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pvs9g"]
Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.217645 5072 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-24T11:11:31.581874255Z","Handler":null,"Name":""}
Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.221256 5072 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.221297 5072 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.230741 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.230922 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f53d96c-25ab-4cc4-ac1a-84ae05681d4b-catalog-content\") pod \"community-operators-s9t8g\" (UID: \"2f53d96c-25ab-4cc4-ac1a-84ae05681d4b\") " pod="openshift-marketplace/community-operators-s9t8g"
Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.230991 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xrfd\" (UniqueName: \"kubernetes.io/projected/2f53d96c-25ab-4cc4-ac1a-84ae05681d4b-kube-api-access-9xrfd\") pod \"community-operators-s9t8g\" (UID: \"2f53d96c-25ab-4cc4-ac1a-84ae05681d4b\") " pod="openshift-marketplace/community-operators-s9t8g"
\"kubernetes.io/empty-dir/2f53d96c-25ab-4cc4-ac1a-84ae05681d4b-utilities\") pod \"community-operators-s9t8g\" (UID: \"2f53d96c-25ab-4cc4-ac1a-84ae05681d4b\") " pod="openshift-marketplace/community-operators-s9t8g" Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.231497 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f53d96c-25ab-4cc4-ac1a-84ae05681d4b-utilities\") pod \"community-operators-s9t8g\" (UID: \"2f53d96c-25ab-4cc4-ac1a-84ae05681d4b\") " pod="openshift-marketplace/community-operators-s9t8g" Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.231745 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f53d96c-25ab-4cc4-ac1a-84ae05681d4b-catalog-content\") pod \"community-operators-s9t8g\" (UID: \"2f53d96c-25ab-4cc4-ac1a-84ae05681d4b\") " pod="openshift-marketplace/community-operators-s9t8g" Nov 24 11:11:32 crc kubenswrapper[5072]: W1124 11:11:32.231943 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbeb508a_245e_4c6c_9d4f_6f6f330cea5d.slice/crio-939d608df208286fe427568c919fe8ba318dc489192c59c701db33fcaec1bfc5 WatchSource:0}: Error finding container 939d608df208286fe427568c919fe8ba318dc489192c59c701db33fcaec1bfc5: Status 404 returned error can't find the container with id 939d608df208286fe427568c919fe8ba318dc489192c59c701db33fcaec1bfc5 Nov 24 11:11:32 crc kubenswrapper[5072]: W1124 11:11:32.232335 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f57ff17_1692_4fef_ba23_2b510f5a748b.slice/crio-0f9de5a99d4455e5c05febd476c92ccf2c123f3d8fc7dfc232c2e217c3b74b9c WatchSource:0}: Error finding container 0f9de5a99d4455e5c05febd476c92ccf2c123f3d8fc7dfc232c2e217c3b74b9c: Status 404 returned error can't find the container with id 0f9de5a99d4455e5c05febd476c92ccf2c123f3d8fc7dfc232c2e217c3b74b9c Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.240798 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.256029 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xrfd\" (UniqueName: \"kubernetes.io/projected/2f53d96c-25ab-4cc4-ac1a-84ae05681d4b-kube-api-access-9xrfd\") pod \"community-operators-s9t8g\" (UID: \"2f53d96c-25ab-4cc4-ac1a-84ae05681d4b\") " pod="openshift-marketplace/community-operators-s9t8g" Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.332033 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96be0671-6ddf-4af0-8989-da8c4a4dcfa7-config-volume\") pod \"96be0671-6ddf-4af0-8989-da8c4a4dcfa7\" (UID: \"96be0671-6ddf-4af0-8989-da8c4a4dcfa7\") " Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.332311 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96be0671-6ddf-4af0-8989-da8c4a4dcfa7-secret-volume\") pod \"96be0671-6ddf-4af0-8989-da8c4a4dcfa7\" (UID: \"96be0671-6ddf-4af0-8989-da8c4a4dcfa7\") " Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.332366 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46mrm\" (UniqueName: \"kubernetes.io/projected/96be0671-6ddf-4af0-8989-da8c4a4dcfa7-kube-api-access-46mrm\") pod \"96be0671-6ddf-4af0-8989-da8c4a4dcfa7\" (UID: \"96be0671-6ddf-4af0-8989-da8c4a4dcfa7\") " Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.332509 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96be0671-6ddf-4af0-8989-da8c4a4dcfa7-config-volume" (OuterVolumeSpecName: "config-volume") pod "96be0671-6ddf-4af0-8989-da8c4a4dcfa7" (UID: "96be0671-6ddf-4af0-8989-da8c4a4dcfa7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.332613 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.332882 5072 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96be0671-6ddf-4af0-8989-da8c4a4dcfa7-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.335593 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96be0671-6ddf-4af0-8989-da8c4a4dcfa7-kube-api-access-46mrm" (OuterVolumeSpecName: "kube-api-access-46mrm") pod "96be0671-6ddf-4af0-8989-da8c4a4dcfa7" (UID: "96be0671-6ddf-4af0-8989-da8c4a4dcfa7"). InnerVolumeSpecName "kube-api-access-46mrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.340893 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96be0671-6ddf-4af0-8989-da8c4a4dcfa7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "96be0671-6ddf-4af0-8989-da8c4a4dcfa7" (UID: "96be0671-6ddf-4af0-8989-da8c4a4dcfa7"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.350391 5072 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.350432 5072 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.370065 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lsrl7"] Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.392083 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s9t8g" Nov 24 11:11:32 crc kubenswrapper[5072]: W1124 11:11:32.393199 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b22a28d_845b_4cc5_a4d6_bd747cf5c958.slice/crio-88e1b2c3feec9a70de81c0dcaed4a38cac26a211393cf43a12816f6ab6466bd1 WatchSource:0}: Error finding container 88e1b2c3feec9a70de81c0dcaed4a38cac26a211393cf43a12816f6ab6466bd1: Status 404 returned error can't find the container with id 88e1b2c3feec9a70de81c0dcaed4a38cac26a211393cf43a12816f6ab6466bd1 Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.395335 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9w2qz\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.434414 5072 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96be0671-6ddf-4af0-8989-da8c4a4dcfa7-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.434445 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46mrm\" (UniqueName: \"kubernetes.io/projected/96be0671-6ddf-4af0-8989-da8c4a4dcfa7-kube-api-access-46mrm\") on node \"crc\" DevicePath \"\"" Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.504440 5072 patch_prober.go:28] interesting pod/router-default-5444994796-wxc9p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:11:32 crc kubenswrapper[5072]: [-]has-synced failed: reason withheld Nov 24 11:11:32 crc kubenswrapper[5072]: [+]process-running ok Nov 24 11:11:32 crc kubenswrapper[5072]: healthz check failed Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.504486 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wxc9p" podUID="8ef682f0-d784-48ac-83f3-4c718f34edaf" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.545018 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s9t8g"] Nov 24 11:11:32 crc kubenswrapper[5072]: W1124 11:11:32.548952 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f53d96c_25ab_4cc4_ac1a_84ae05681d4b.slice/crio-d9d7b2fa5e1972d8057f9526bdd9a37c72d0aa7fe4171d65b4204568541cdcbc WatchSource:0}: Error finding container d9d7b2fa5e1972d8057f9526bdd9a37c72d0aa7fe4171d65b4204568541cdcbc: Status 404 returned error can't find the container with id d9d7b2fa5e1972d8057f9526bdd9a37c72d0aa7fe4171d65b4204568541cdcbc Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.674596 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.877060 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399700-hnjjf" event={"ID":"96be0671-6ddf-4af0-8989-da8c4a4dcfa7","Type":"ContainerDied","Data":"1bbd92c18eed9b8aa9b2cbef824a3e735cb2c807fe195c897f054d67e71f219d"} Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.877282 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bbd92c18eed9b8aa9b2cbef824a3e735cb2c807fe195c897f054d67e71f219d" Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.877164 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399700-hnjjf" Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.878517 5072 generic.go:334] "Generic (PLEG): container finished" podID="7b22a28d-845b-4cc5-a4d6-bd747cf5c958" containerID="0be346f6f2d879cfafecf2452a8bc82f4b4975e5615bc3f2d57fdbe08fe0ab2c" exitCode=0 Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.878586 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsrl7" event={"ID":"7b22a28d-845b-4cc5-a4d6-bd747cf5c958","Type":"ContainerDied","Data":"0be346f6f2d879cfafecf2452a8bc82f4b4975e5615bc3f2d57fdbe08fe0ab2c"} Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.878614 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsrl7" event={"ID":"7b22a28d-845b-4cc5-a4d6-bd747cf5c958","Type":"ContainerStarted","Data":"88e1b2c3feec9a70de81c0dcaed4a38cac26a211393cf43a12816f6ab6466bd1"} Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.880013 5072 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.899223 5072 generic.go:334] "Generic (PLEG): container finished" podID="2f53d96c-25ab-4cc4-ac1a-84ae05681d4b" containerID="5469b2e215c556c9886ad852585a71558523ba4b0812c6d3c6342ca207daeace" exitCode=0 Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.899305 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9t8g" event={"ID":"2f53d96c-25ab-4cc4-ac1a-84ae05681d4b","Type":"ContainerDied","Data":"5469b2e215c556c9886ad852585a71558523ba4b0812c6d3c6342ca207daeace"} Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.899328 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-s9t8g" event={"ID":"2f53d96c-25ab-4cc4-ac1a-84ae05681d4b","Type":"ContainerStarted","Data":"d9d7b2fa5e1972d8057f9526bdd9a37c72d0aa7fe4171d65b4204568541cdcbc"} Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.915328 5072 generic.go:334] "Generic (PLEG): container finished" podID="2f57ff17-1692-4fef-ba23-2b510f5a748b" containerID="14e7ad0acb5f9b40b7aac0926e576bfa93a8d825bd873ca1062264032f09368e" exitCode=0 Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.915563 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pvs9g" event={"ID":"2f57ff17-1692-4fef-ba23-2b510f5a748b","Type":"ContainerDied","Data":"14e7ad0acb5f9b40b7aac0926e576bfa93a8d825bd873ca1062264032f09368e"} Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.915595 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pvs9g" event={"ID":"2f57ff17-1692-4fef-ba23-2b510f5a748b","Type":"ContainerStarted","Data":"0f9de5a99d4455e5c05febd476c92ccf2c123f3d8fc7dfc232c2e217c3b74b9c"} Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.917031 5072 generic.go:334] "Generic (PLEG): container finished" podID="cbeb508a-245e-4c6c-9d4f-6f6f330cea5d" containerID="273c8e1614c8796c7f274fc3178d7508cc1dc89246aaaf2d29d8b8c30f5833da" exitCode=0 Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.917976 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-slkhf" event={"ID":"cbeb508a-245e-4c6c-9d4f-6f6f330cea5d","Type":"ContainerDied","Data":"273c8e1614c8796c7f274fc3178d7508cc1dc89246aaaf2d29d8b8c30f5833da"} Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.917994 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-slkhf" event={"ID":"cbeb508a-245e-4c6c-9d4f-6f6f330cea5d","Type":"ContainerStarted","Data":"939d608df208286fe427568c919fe8ba318dc489192c59c701db33fcaec1bfc5"} Nov 24 11:11:32 crc kubenswrapper[5072]: I1124 11:11:32.943474 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9w2qz"] Nov 24 11:11:33 crc kubenswrapper[5072]: I1124 11:11:33.030045 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 24 11:11:33 crc kubenswrapper[5072]: I1124 11:11:33.504861 5072 patch_prober.go:28] interesting pod/router-default-5444994796-wxc9p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:11:33 crc kubenswrapper[5072]: [-]has-synced failed: reason withheld Nov 24 11:11:33 crc kubenswrapper[5072]: [+]process-running ok Nov 24 11:11:33 crc kubenswrapper[5072]: healthz check failed Nov 24 11:11:33 crc kubenswrapper[5072]: I1124 11:11:33.505270 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wxc9p" podUID="8ef682f0-d784-48ac-83f3-4c718f34edaf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:11:33 crc kubenswrapper[5072]: I1124 11:11:33.616140 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cvm5b"] Nov 24 11:11:33 crc kubenswrapper[5072]: E1124 11:11:33.616503 5072 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="96be0671-6ddf-4af0-8989-da8c4a4dcfa7" containerName="collect-profiles" Nov 24 11:11:33 crc kubenswrapper[5072]: I1124 11:11:33.616526 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="96be0671-6ddf-4af0-8989-da8c4a4dcfa7" containerName="collect-profiles" Nov 24 11:11:33 crc kubenswrapper[5072]: I1124 11:11:33.616806 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="96be0671-6ddf-4af0-8989-da8c4a4dcfa7" containerName="collect-profiles" Nov 24 11:11:33 crc kubenswrapper[5072]: I1124 11:11:33.619291 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cvm5b" Nov 24 11:11:33 crc kubenswrapper[5072]: I1124 11:11:33.621865 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 24 11:11:33 crc kubenswrapper[5072]: I1124 11:11:33.625857 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cvm5b"] Nov 24 11:11:33 crc kubenswrapper[5072]: I1124 11:11:33.749280 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b1a9a7-8932-4045-bd63-bbc4d796d018-catalog-content\") pod \"redhat-marketplace-cvm5b\" (UID: \"f9b1a9a7-8932-4045-bd63-bbc4d796d018\") " pod="openshift-marketplace/redhat-marketplace-cvm5b" Nov 24 11:11:33 crc kubenswrapper[5072]: I1124 11:11:33.749385 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b1a9a7-8932-4045-bd63-bbc4d796d018-utilities\") pod \"redhat-marketplace-cvm5b\" (UID: \"f9b1a9a7-8932-4045-bd63-bbc4d796d018\") " pod="openshift-marketplace/redhat-marketplace-cvm5b" Nov 24 11:11:33 crc kubenswrapper[5072]: I1124 11:11:33.749504 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnvd5\" (UniqueName: \"kubernetes.io/projected/f9b1a9a7-8932-4045-bd63-bbc4d796d018-kube-api-access-gnvd5\") pod \"redhat-marketplace-cvm5b\" (UID: \"f9b1a9a7-8932-4045-bd63-bbc4d796d018\") " pod="openshift-marketplace/redhat-marketplace-cvm5b" Nov 24 11:11:33 crc kubenswrapper[5072]: I1124 11:11:33.851358 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b1a9a7-8932-4045-bd63-bbc4d796d018-catalog-content\") pod \"redhat-marketplace-cvm5b\" (UID: \"f9b1a9a7-8932-4045-bd63-bbc4d796d018\") " pod="openshift-marketplace/redhat-marketplace-cvm5b" Nov 24 11:11:33 crc kubenswrapper[5072]: I1124 11:11:33.851485 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b1a9a7-8932-4045-bd63-bbc4d796d018-utilities\") pod \"redhat-marketplace-cvm5b\" (UID: \"f9b1a9a7-8932-4045-bd63-bbc4d796d018\") " pod="openshift-marketplace/redhat-marketplace-cvm5b" Nov 24 11:11:33 crc kubenswrapper[5072]: I1124 11:11:33.851555 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnvd5\" (UniqueName: \"kubernetes.io/projected/f9b1a9a7-8932-4045-bd63-bbc4d796d018-kube-api-access-gnvd5\") pod \"redhat-marketplace-cvm5b\" (UID: \"f9b1a9a7-8932-4045-bd63-bbc4d796d018\") " pod="openshift-marketplace/redhat-marketplace-cvm5b" Nov 24 11:11:33 crc kubenswrapper[5072]: I1124 11:11:33.858628 5072 
Nov 24 11:11:33 crc kubenswrapper[5072]: I1124 11:11:33.858628 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b1a9a7-8932-4045-bd63-bbc4d796d018-catalog-content\") pod \"redhat-marketplace-cvm5b\" (UID: \"f9b1a9a7-8932-4045-bd63-bbc4d796d018\") " pod="openshift-marketplace/redhat-marketplace-cvm5b"
Nov 24 11:11:33 crc kubenswrapper[5072]: I1124 11:11:33.859817 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b1a9a7-8932-4045-bd63-bbc4d796d018-utilities\") pod \"redhat-marketplace-cvm5b\" (UID: \"f9b1a9a7-8932-4045-bd63-bbc4d796d018\") " pod="openshift-marketplace/redhat-marketplace-cvm5b"
Nov 24 11:11:33 crc kubenswrapper[5072]: I1124 11:11:33.873283 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnvd5\" (UniqueName: \"kubernetes.io/projected/f9b1a9a7-8932-4045-bd63-bbc4d796d018-kube-api-access-gnvd5\") pod \"redhat-marketplace-cvm5b\" (UID: \"f9b1a9a7-8932-4045-bd63-bbc4d796d018\") " pod="openshift-marketplace/redhat-marketplace-cvm5b"
Nov 24 11:11:33 crc kubenswrapper[5072]: I1124 11:11:33.923838 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" event={"ID":"d68516ef-c18f-4d3f-bc80-71739e73cee1","Type":"ContainerStarted","Data":"bc443c4756d71119b2cb06fe4b2b1fcc698178d163338849422cedc0d20f7424"}
Nov 24 11:11:33 crc kubenswrapper[5072]: I1124 11:11:33.923887 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" event={"ID":"d68516ef-c18f-4d3f-bc80-71739e73cee1","Type":"ContainerStarted","Data":"b50e3edb3e87ac26b6fadae92cd538b42386f7ce95e0f359f3a5ea97a6809f73"}
Nov 24 11:11:33 crc kubenswrapper[5072]: I1124 11:11:33.924761 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz"
Nov 24 11:11:33 crc kubenswrapper[5072]: I1124 11:11:33.938772 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cvm5b"
Nov 24 11:11:33 crc kubenswrapper[5072]: I1124 11:11:33.944621 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" podStartSLOduration=122.944601099 podStartE2EDuration="2m2.944601099s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:33.940335017 +0000 UTC m=+145.651859493" watchObservedRunningTime="2025-11-24 11:11:33.944601099 +0000 UTC m=+145.656125575"
Nov 24 11:11:33 crc kubenswrapper[5072]: I1124 11:11:33.975093 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Nov 24 11:11:33 crc kubenswrapper[5072]: I1124 11:11:33.975791 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 24 11:11:33 crc kubenswrapper[5072]: I1124 11:11:33.979578 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Nov 24 11:11:33 crc kubenswrapper[5072]: I1124 11:11:33.980611 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.009744 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.024361 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hjbg7"]
Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.025567 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hjbg7"
Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.035904 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hjbg7"]
Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.159343 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f9bfc36-3741-4e93-8356-f4fa8d8920a4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7f9bfc36-3741-4e93-8356-f4fa8d8920a4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.159618 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcmbx\" (UniqueName: \"kubernetes.io/projected/f157ffe3-63a8-4ad9-a432-d65de31b5e8f-kube-api-access-hcmbx\") pod \"redhat-marketplace-hjbg7\" (UID: \"f157ffe3-63a8-4ad9-a432-d65de31b5e8f\") " pod="openshift-marketplace/redhat-marketplace-hjbg7"
Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.159672 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f157ffe3-63a8-4ad9-a432-d65de31b5e8f-catalog-content\") pod \"redhat-marketplace-hjbg7\" (UID: \"f157ffe3-63a8-4ad9-a432-d65de31b5e8f\") " pod="openshift-marketplace/redhat-marketplace-hjbg7"
Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.159696 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f157ffe3-63a8-4ad9-a432-d65de31b5e8f-utilities\") pod \"redhat-marketplace-hjbg7\" (UID: \"f157ffe3-63a8-4ad9-a432-d65de31b5e8f\") " pod="openshift-marketplace/redhat-marketplace-hjbg7"
Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.159752 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f9bfc36-3741-4e93-8356-f4fa8d8920a4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7f9bfc36-3741-4e93-8356-f4fa8d8920a4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
\"7f9bfc36-3741-4e93-8356-f4fa8d8920a4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.260587 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f9bfc36-3741-4e93-8356-f4fa8d8920a4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7f9bfc36-3741-4e93-8356-f4fa8d8920a4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.260669 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcmbx\" (UniqueName: \"kubernetes.io/projected/f157ffe3-63a8-4ad9-a432-d65de31b5e8f-kube-api-access-hcmbx\") pod \"redhat-marketplace-hjbg7\" (UID: \"f157ffe3-63a8-4ad9-a432-d65de31b5e8f\") " pod="openshift-marketplace/redhat-marketplace-hjbg7" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.260759 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f157ffe3-63a8-4ad9-a432-d65de31b5e8f-catalog-content\") pod \"redhat-marketplace-hjbg7\" (UID: \"f157ffe3-63a8-4ad9-a432-d65de31b5e8f\") " pod="openshift-marketplace/redhat-marketplace-hjbg7" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.260779 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f9bfc36-3741-4e93-8356-f4fa8d8920a4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7f9bfc36-3741-4e93-8356-f4fa8d8920a4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.260785 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f157ffe3-63a8-4ad9-a432-d65de31b5e8f-utilities\") pod \"redhat-marketplace-hjbg7\" (UID: \"f157ffe3-63a8-4ad9-a432-d65de31b5e8f\") " pod="openshift-marketplace/redhat-marketplace-hjbg7" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.261223 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f157ffe3-63a8-4ad9-a432-d65de31b5e8f-catalog-content\") pod \"redhat-marketplace-hjbg7\" (UID: \"f157ffe3-63a8-4ad9-a432-d65de31b5e8f\") " pod="openshift-marketplace/redhat-marketplace-hjbg7" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.261314 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f157ffe3-63a8-4ad9-a432-d65de31b5e8f-utilities\") pod \"redhat-marketplace-hjbg7\" (UID: \"f157ffe3-63a8-4ad9-a432-d65de31b5e8f\") " pod="openshift-marketplace/redhat-marketplace-hjbg7" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.284576 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcmbx\" (UniqueName: \"kubernetes.io/projected/f157ffe3-63a8-4ad9-a432-d65de31b5e8f-kube-api-access-hcmbx\") pod \"redhat-marketplace-hjbg7\" (UID: \"f157ffe3-63a8-4ad9-a432-d65de31b5e8f\") " pod="openshift-marketplace/redhat-marketplace-hjbg7" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.285837 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f9bfc36-3741-4e93-8356-f4fa8d8920a4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"7f9bfc36-3741-4e93-8356-f4fa8d8920a4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.305693 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.376916 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cvm5b"] Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.382923 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hjbg7" Nov 24 11:11:34 crc kubenswrapper[5072]: W1124 11:11:34.405112 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9b1a9a7_8932_4045_bd63_bbc4d796d018.slice/crio-3648b5f00ab456e28452b7792d6bb6ffd2765ec564499205b70a7999ac33cb85 WatchSource:0}: Error finding container 3648b5f00ab456e28452b7792d6bb6ffd2765ec564499205b70a7999ac33cb85: Status 404 returned error can't find the container with id 3648b5f00ab456e28452b7792d6bb6ffd2765ec564499205b70a7999ac33cb85 Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.468205 5072 patch_prober.go:28] interesting pod/downloads-7954f5f757-fpxll container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.468258 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fpxll" podUID="1cd359a9-17ba-43c9-8cb3-7c786777226b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.468302 5072 patch_prober.go:28] interesting pod/downloads-7954f5f757-fpxll container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.468354 5072 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fpxll" podUID="1cd359a9-17ba-43c9-8cb3-7c786777226b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.500646 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-798pd" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.500679 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-798pd" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.503656 5072 patch_prober.go:28] interesting pod/console-f9d7485db-798pd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.503706 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-798pd" podUID="9d30ed7a-3577-40f4-8d32-eec9f851ab19" containerName="console" probeResult="failure" 
output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.506026 5072 patch_prober.go:28] interesting pod/router-default-5444994796-wxc9p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:11:34 crc kubenswrapper[5072]: [-]has-synced failed: reason withheld Nov 24 11:11:34 crc kubenswrapper[5072]: [+]process-running ok Nov 24 11:11:34 crc kubenswrapper[5072]: healthz check failed Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.506080 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wxc9p" podUID="8ef682f0-d784-48ac-83f3-4c718f34edaf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.611869 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.617930 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cngqk"] Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.619156 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cngqk" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.621173 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.622062 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cngqk"] Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.729195 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hjbg7"] Nov 24 11:11:34 crc kubenswrapper[5072]: W1124 11:11:34.758287 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf157ffe3_63a8_4ad9_a432_d65de31b5e8f.slice/crio-12c57c989f84b8e30e961b8761672e70985fa2fff5a3de6ecb39e30c6a9f261f WatchSource:0}: Error finding container 12c57c989f84b8e30e961b8761672e70985fa2fff5a3de6ecb39e30c6a9f261f: Status 404 returned error can't find the container with id 12c57c989f84b8e30e961b8761672e70985fa2fff5a3de6ecb39e30c6a9f261f Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.772645 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b89b78a-9da6-40b4-8285-4311083ba178-catalog-content\") pod \"redhat-operators-cngqk\" (UID: \"2b89b78a-9da6-40b4-8285-4311083ba178\") " pod="openshift-marketplace/redhat-operators-cngqk" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.772809 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cfrh\" (UniqueName: \"kubernetes.io/projected/2b89b78a-9da6-40b4-8285-4311083ba178-kube-api-access-2cfrh\") pod \"redhat-operators-cngqk\" (UID: \"2b89b78a-9da6-40b4-8285-4311083ba178\") " pod="openshift-marketplace/redhat-operators-cngqk" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.772992 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/2b89b78a-9da6-40b4-8285-4311083ba178-utilities\") pod \"redhat-operators-cngqk\" (UID: \"2b89b78a-9da6-40b4-8285-4311083ba178\") " pod="openshift-marketplace/redhat-operators-cngqk" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.873780 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-4qrkp" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.873926 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b89b78a-9da6-40b4-8285-4311083ba178-utilities\") pod \"redhat-operators-cngqk\" (UID: \"2b89b78a-9da6-40b4-8285-4311083ba178\") " pod="openshift-marketplace/redhat-operators-cngqk" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.873994 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b89b78a-9da6-40b4-8285-4311083ba178-catalog-content\") pod \"redhat-operators-cngqk\" (UID: \"2b89b78a-9da6-40b4-8285-4311083ba178\") " pod="openshift-marketplace/redhat-operators-cngqk" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.874025 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cfrh\" (UniqueName: \"kubernetes.io/projected/2b89b78a-9da6-40b4-8285-4311083ba178-kube-api-access-2cfrh\") pod \"redhat-operators-cngqk\" (UID: \"2b89b78a-9da6-40b4-8285-4311083ba178\") " pod="openshift-marketplace/redhat-operators-cngqk" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.874649 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b89b78a-9da6-40b4-8285-4311083ba178-utilities\") pod \"redhat-operators-cngqk\" (UID: \"2b89b78a-9da6-40b4-8285-4311083ba178\") " pod="openshift-marketplace/redhat-operators-cngqk" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.875126 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b89b78a-9da6-40b4-8285-4311083ba178-catalog-content\") pod \"redhat-operators-cngqk\" (UID: \"2b89b78a-9da6-40b4-8285-4311083ba178\") " pod="openshift-marketplace/redhat-operators-cngqk" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.887124 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-4qrkp" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.895942 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cfrh\" (UniqueName: \"kubernetes.io/projected/2b89b78a-9da6-40b4-8285-4311083ba178-kube-api-access-2cfrh\") pod \"redhat-operators-cngqk\" (UID: \"2b89b78a-9da6-40b4-8285-4311083ba178\") " pod="openshift-marketplace/redhat-operators-cngqk" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.954148 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cngqk" Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.971095 5072 generic.go:334] "Generic (PLEG): container finished" podID="f9b1a9a7-8932-4045-bd63-bbc4d796d018" containerID="cfa1f17f667120865c41ae475f888857d09a6046a2db2a5e183afb10aa27917a" exitCode=0 Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.971172 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cvm5b" event={"ID":"f9b1a9a7-8932-4045-bd63-bbc4d796d018","Type":"ContainerDied","Data":"cfa1f17f667120865c41ae475f888857d09a6046a2db2a5e183afb10aa27917a"} Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.971195 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cvm5b" event={"ID":"f9b1a9a7-8932-4045-bd63-bbc4d796d018","Type":"ContainerStarted","Data":"3648b5f00ab456e28452b7792d6bb6ffd2765ec564499205b70a7999ac33cb85"} Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.982799 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjbg7" event={"ID":"f157ffe3-63a8-4ad9-a432-d65de31b5e8f","Type":"ContainerStarted","Data":"12c57c989f84b8e30e961b8761672e70985fa2fff5a3de6ecb39e30c6a9f261f"} Nov 24 11:11:34 crc kubenswrapper[5072]: I1124 11:11:34.985177 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7f9bfc36-3741-4e93-8356-f4fa8d8920a4","Type":"ContainerStarted","Data":"b82b5aa1f17ca755184b25b015c6aa8976bee64f94b7e5fe54c4cfbd276e1903"} Nov 24 11:11:35 crc kubenswrapper[5072]: I1124 11:11:35.090954 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-82xhn"] Nov 24 11:11:35 crc kubenswrapper[5072]: I1124 11:11:35.092720 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-82xhn"] Nov 24 11:11:35 crc kubenswrapper[5072]: I1124 11:11:35.112931 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-82xhn" Nov 24 11:11:35 crc kubenswrapper[5072]: I1124 11:11:35.293805 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pssm\" (UniqueName: \"kubernetes.io/projected/0e24c213-2ec7-48d9-a18c-bc0457d2a8a3-kube-api-access-8pssm\") pod \"redhat-operators-82xhn\" (UID: \"0e24c213-2ec7-48d9-a18c-bc0457d2a8a3\") " pod="openshift-marketplace/redhat-operators-82xhn" Nov 24 11:11:35 crc kubenswrapper[5072]: I1124 11:11:35.296152 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e24c213-2ec7-48d9-a18c-bc0457d2a8a3-utilities\") pod \"redhat-operators-82xhn\" (UID: \"0e24c213-2ec7-48d9-a18c-bc0457d2a8a3\") " pod="openshift-marketplace/redhat-operators-82xhn" Nov 24 11:11:35 crc kubenswrapper[5072]: I1124 11:11:35.296643 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e24c213-2ec7-48d9-a18c-bc0457d2a8a3-catalog-content\") pod \"redhat-operators-82xhn\" (UID: \"0e24c213-2ec7-48d9-a18c-bc0457d2a8a3\") " pod="openshift-marketplace/redhat-operators-82xhn" Nov 24 11:11:35 crc kubenswrapper[5072]: I1124 11:11:35.342696 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cngqk"] Nov 24 11:11:35 crc kubenswrapper[5072]: I1124 11:11:35.400309 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pssm\" (UniqueName: \"kubernetes.io/projected/0e24c213-2ec7-48d9-a18c-bc0457d2a8a3-kube-api-access-8pssm\") pod \"redhat-operators-82xhn\" (UID: \"0e24c213-2ec7-48d9-a18c-bc0457d2a8a3\") " pod="openshift-marketplace/redhat-operators-82xhn" Nov 24 11:11:35 crc kubenswrapper[5072]: I1124 11:11:35.400398 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e24c213-2ec7-48d9-a18c-bc0457d2a8a3-utilities\") pod \"redhat-operators-82xhn\" (UID: \"0e24c213-2ec7-48d9-a18c-bc0457d2a8a3\") " pod="openshift-marketplace/redhat-operators-82xhn" Nov 24 11:11:35 crc kubenswrapper[5072]: I1124 11:11:35.400429 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e24c213-2ec7-48d9-a18c-bc0457d2a8a3-catalog-content\") pod \"redhat-operators-82xhn\" (UID: \"0e24c213-2ec7-48d9-a18c-bc0457d2a8a3\") " pod="openshift-marketplace/redhat-operators-82xhn" Nov 24 11:11:35 crc kubenswrapper[5072]: I1124 11:11:35.401027 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e24c213-2ec7-48d9-a18c-bc0457d2a8a3-catalog-content\") pod \"redhat-operators-82xhn\" (UID: \"0e24c213-2ec7-48d9-a18c-bc0457d2a8a3\") " pod="openshift-marketplace/redhat-operators-82xhn" Nov 24 11:11:35 crc kubenswrapper[5072]: I1124 11:11:35.401650 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e24c213-2ec7-48d9-a18c-bc0457d2a8a3-utilities\") pod \"redhat-operators-82xhn\" (UID: \"0e24c213-2ec7-48d9-a18c-bc0457d2a8a3\") " pod="openshift-marketplace/redhat-operators-82xhn" Nov 24 11:11:35 crc kubenswrapper[5072]: I1124 11:11:35.427941 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8pssm\" (UniqueName: \"kubernetes.io/projected/0e24c213-2ec7-48d9-a18c-bc0457d2a8a3-kube-api-access-8pssm\") pod \"redhat-operators-82xhn\" (UID: \"0e24c213-2ec7-48d9-a18c-bc0457d2a8a3\") " pod="openshift-marketplace/redhat-operators-82xhn" Nov 24 11:11:35 crc kubenswrapper[5072]: I1124 11:11:35.471582 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-82xhn" Nov 24 11:11:35 crc kubenswrapper[5072]: I1124 11:11:35.503095 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-wxc9p" Nov 24 11:11:35 crc kubenswrapper[5072]: I1124 11:11:35.506303 5072 patch_prober.go:28] interesting pod/router-default-5444994796-wxc9p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:11:35 crc kubenswrapper[5072]: [-]has-synced failed: reason withheld Nov 24 11:11:35 crc kubenswrapper[5072]: [+]process-running ok Nov 24 11:11:35 crc kubenswrapper[5072]: healthz check failed Nov 24 11:11:35 crc kubenswrapper[5072]: I1124 11:11:35.506358 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wxc9p" podUID="8ef682f0-d784-48ac-83f3-4c718f34edaf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:11:35 crc kubenswrapper[5072]: I1124 11:11:35.715213 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-82xhn"] Nov 24 11:11:35 crc kubenswrapper[5072]: W1124 11:11:35.742684 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e24c213_2ec7_48d9_a18c_bc0457d2a8a3.slice/crio-95144dadec8b8bb33ed1da974e3b47c64b4b4311e95e48bdce4b2d67e2de9bf0 WatchSource:0}: Error finding container 95144dadec8b8bb33ed1da974e3b47c64b4b4311e95e48bdce4b2d67e2de9bf0: Status 404 returned error can't find the container with id 95144dadec8b8bb33ed1da974e3b47c64b4b4311e95e48bdce4b2d67e2de9bf0 Nov 24 11:11:35 crc kubenswrapper[5072]: I1124 11:11:35.995997 5072 generic.go:334] "Generic (PLEG): container finished" podID="2b89b78a-9da6-40b4-8285-4311083ba178" containerID="4b1e65418291db316bdd7bc4ef4f404e1ad9a81e7fbf5b403e62a7d339755957" exitCode=0 Nov 24 11:11:35 crc kubenswrapper[5072]: I1124 11:11:35.996234 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cngqk" event={"ID":"2b89b78a-9da6-40b4-8285-4311083ba178","Type":"ContainerDied","Data":"4b1e65418291db316bdd7bc4ef4f404e1ad9a81e7fbf5b403e62a7d339755957"} Nov 24 11:11:35 crc kubenswrapper[5072]: I1124 11:11:35.996278 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cngqk" event={"ID":"2b89b78a-9da6-40b4-8285-4311083ba178","Type":"ContainerStarted","Data":"81521d2fdd979fcbd96bbf586e97e673fb8f4467d6ce38c732f32584fb89cf1b"} Nov 24 11:11:36 crc kubenswrapper[5072]: I1124 11:11:36.012638 5072 generic.go:334] "Generic (PLEG): container finished" podID="7f9bfc36-3741-4e93-8356-f4fa8d8920a4" containerID="fde93511229922a25cb69599f4cec126d7a9343de4de08f04d32840a6ab871c0" exitCode=0 Nov 24 11:11:36 crc kubenswrapper[5072]: I1124 11:11:36.012825 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"7f9bfc36-3741-4e93-8356-f4fa8d8920a4","Type":"ContainerDied","Data":"fde93511229922a25cb69599f4cec126d7a9343de4de08f04d32840a6ab871c0"} Nov 24 11:11:36 crc kubenswrapper[5072]: I1124 11:11:36.025601 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82xhn" event={"ID":"0e24c213-2ec7-48d9-a18c-bc0457d2a8a3","Type":"ContainerStarted","Data":"95144dadec8b8bb33ed1da974e3b47c64b4b4311e95e48bdce4b2d67e2de9bf0"} Nov 24 11:11:36 crc kubenswrapper[5072]: I1124 11:11:36.028427 5072 generic.go:334] "Generic (PLEG): container finished" podID="f157ffe3-63a8-4ad9-a432-d65de31b5e8f" containerID="26e221e74930b55ce43c59b0808d81eacbf39c5454f7eef8fcc15701002913b1" exitCode=0 Nov 24 11:11:36 crc kubenswrapper[5072]: I1124 11:11:36.028940 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjbg7" event={"ID":"f157ffe3-63a8-4ad9-a432-d65de31b5e8f","Type":"ContainerDied","Data":"26e221e74930b55ce43c59b0808d81eacbf39c5454f7eef8fcc15701002913b1"} Nov 24 11:11:36 crc kubenswrapper[5072]: I1124 11:11:36.505644 5072 patch_prober.go:28] interesting pod/router-default-5444994796-wxc9p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:11:36 crc kubenswrapper[5072]: [-]has-synced failed: reason withheld Nov 24 11:11:36 crc kubenswrapper[5072]: [+]process-running ok Nov 24 11:11:36 crc kubenswrapper[5072]: healthz check failed Nov 24 11:11:36 crc kubenswrapper[5072]: I1124 11:11:36.505716 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wxc9p" podUID="8ef682f0-d784-48ac-83f3-4c718f34edaf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:11:36 crc kubenswrapper[5072]: I1124 11:11:36.618358 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 24 11:11:36 crc kubenswrapper[5072]: I1124 11:11:36.619632 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 11:11:36 crc kubenswrapper[5072]: I1124 11:11:36.623517 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 24 11:11:36 crc kubenswrapper[5072]: I1124 11:11:36.625067 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 24 11:11:36 crc kubenswrapper[5072]: I1124 11:11:36.626326 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 24 11:11:36 crc kubenswrapper[5072]: I1124 11:11:36.720425 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/525fe918-d559-44d0-b583-0347bbd7424c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"525fe918-d559-44d0-b583-0347bbd7424c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 11:11:36 crc kubenswrapper[5072]: I1124 11:11:36.720613 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/525fe918-d559-44d0-b583-0347bbd7424c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"525fe918-d559-44d0-b583-0347bbd7424c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 11:11:36 crc kubenswrapper[5072]: I1124 11:11:36.821985 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/525fe918-d559-44d0-b583-0347bbd7424c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"525fe918-d559-44d0-b583-0347bbd7424c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 11:11:36 crc kubenswrapper[5072]: I1124 11:11:36.822051 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/525fe918-d559-44d0-b583-0347bbd7424c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"525fe918-d559-44d0-b583-0347bbd7424c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 11:11:36 crc kubenswrapper[5072]: I1124 11:11:36.822158 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/525fe918-d559-44d0-b583-0347bbd7424c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"525fe918-d559-44d0-b583-0347bbd7424c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 11:11:36 crc kubenswrapper[5072]: I1124 11:11:36.848238 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/525fe918-d559-44d0-b583-0347bbd7424c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"525fe918-d559-44d0-b583-0347bbd7424c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 11:11:36 crc kubenswrapper[5072]: I1124 11:11:36.938331 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 11:11:37 crc kubenswrapper[5072]: I1124 11:11:37.036135 5072 generic.go:334] "Generic (PLEG): container finished" podID="0e24c213-2ec7-48d9-a18c-bc0457d2a8a3" containerID="58396fa2b0b653dd60c59ae33a144a3218aaa9ce45c5fdea0a31a519cd4d8d3d" exitCode=0 Nov 24 11:11:37 crc kubenswrapper[5072]: I1124 11:11:37.037756 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82xhn" event={"ID":"0e24c213-2ec7-48d9-a18c-bc0457d2a8a3","Type":"ContainerDied","Data":"58396fa2b0b653dd60c59ae33a144a3218aaa9ce45c5fdea0a31a519cd4d8d3d"} Nov 24 11:11:37 crc kubenswrapper[5072]: I1124 11:11:37.253839 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 11:11:37 crc kubenswrapper[5072]: I1124 11:11:37.433228 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f9bfc36-3741-4e93-8356-f4fa8d8920a4-kube-api-access\") pod \"7f9bfc36-3741-4e93-8356-f4fa8d8920a4\" (UID: \"7f9bfc36-3741-4e93-8356-f4fa8d8920a4\") " Nov 24 11:11:37 crc kubenswrapper[5072]: I1124 11:11:37.433549 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f9bfc36-3741-4e93-8356-f4fa8d8920a4-kubelet-dir\") pod \"7f9bfc36-3741-4e93-8356-f4fa8d8920a4\" (UID: \"7f9bfc36-3741-4e93-8356-f4fa8d8920a4\") " Nov 24 11:11:37 crc kubenswrapper[5072]: I1124 11:11:37.433828 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f9bfc36-3741-4e93-8356-f4fa8d8920a4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7f9bfc36-3741-4e93-8356-f4fa8d8920a4" (UID: "7f9bfc36-3741-4e93-8356-f4fa8d8920a4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:11:37 crc kubenswrapper[5072]: I1124 11:11:37.439680 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f9bfc36-3741-4e93-8356-f4fa8d8920a4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7f9bfc36-3741-4e93-8356-f4fa8d8920a4" (UID: "7f9bfc36-3741-4e93-8356-f4fa8d8920a4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:11:37 crc kubenswrapper[5072]: I1124 11:11:37.491036 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 24 11:11:37 crc kubenswrapper[5072]: I1124 11:11:37.505874 5072 patch_prober.go:28] interesting pod/router-default-5444994796-wxc9p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:11:37 crc kubenswrapper[5072]: [-]has-synced failed: reason withheld Nov 24 11:11:37 crc kubenswrapper[5072]: [+]process-running ok Nov 24 11:11:37 crc kubenswrapper[5072]: healthz check failed Nov 24 11:11:37 crc kubenswrapper[5072]: I1124 11:11:37.505921 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wxc9p" podUID="8ef682f0-d784-48ac-83f3-4c718f34edaf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:11:37 crc kubenswrapper[5072]: I1124 11:11:37.534524 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f9bfc36-3741-4e93-8356-f4fa8d8920a4-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 11:11:37 crc kubenswrapper[5072]: I1124 11:11:37.534556 5072 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f9bfc36-3741-4e93-8356-f4fa8d8920a4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 24 11:11:37 crc kubenswrapper[5072]: I1124 11:11:37.938840 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:11:37 crc kubenswrapper[5072]: I1124 11:11:37.945141 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:11:38 crc kubenswrapper[5072]: I1124 11:11:38.039848 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:11:38 crc kubenswrapper[5072]: I1124 11:11:38.039936 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:11:38 crc kubenswrapper[5072]: I1124 11:11:38.039961 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:11:38 crc kubenswrapper[5072]: I1124 11:11:38.041047 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:11:38 crc kubenswrapper[5072]: I1124 11:11:38.044104 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:11:38 crc kubenswrapper[5072]: I1124 11:11:38.044316 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:11:38 crc kubenswrapper[5072]: I1124 11:11:38.045197 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:11:38 crc kubenswrapper[5072]: I1124 11:11:38.052141 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7f9bfc36-3741-4e93-8356-f4fa8d8920a4","Type":"ContainerDied","Data":"b82b5aa1f17ca755184b25b015c6aa8976bee64f94b7e5fe54c4cfbd276e1903"} Nov 24 11:11:38 crc kubenswrapper[5072]: I1124 11:11:38.052182 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b82b5aa1f17ca755184b25b015c6aa8976bee64f94b7e5fe54c4cfbd276e1903" Nov 24 11:11:38 crc kubenswrapper[5072]: I1124 11:11:38.052229 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 11:11:38 crc kubenswrapper[5072]: I1124 11:11:38.059944 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"525fe918-d559-44d0-b583-0347bbd7424c","Type":"ContainerStarted","Data":"686410ee31b263b37b7fc40cca24520568b052f0e43be96e0db9ebc4affbdcf8"} Nov 24 11:11:38 crc kubenswrapper[5072]: I1124 11:11:38.059990 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"525fe918-d559-44d0-b583-0347bbd7424c","Type":"ContainerStarted","Data":"ae5ccbd71237ad3f56f0fe2e721097fa712b5770f3ab3fb8d1dbd8166f95582f"} Nov 24 11:11:38 crc kubenswrapper[5072]: I1124 11:11:38.084121 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:11:38 crc kubenswrapper[5072]: I1124 11:11:38.086945 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.086926962 podStartE2EDuration="2.086926962s" podCreationTimestamp="2025-11-24 11:11:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:11:38.084114301 +0000 UTC m=+149.795638777" watchObservedRunningTime="2025-11-24 11:11:38.086926962 +0000 UTC m=+149.798451438" Nov 24 11:11:38 crc kubenswrapper[5072]: I1124 11:11:38.090358 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:11:38 crc kubenswrapper[5072]: I1124 11:11:38.505282 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-wxc9p" Nov 24 11:11:38 crc kubenswrapper[5072]: I1124 11:11:38.507991 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-wxc9p" Nov 24 11:11:39 crc kubenswrapper[5072]: I1124 11:11:39.075417 5072 generic.go:334] "Generic (PLEG): container finished" podID="525fe918-d559-44d0-b583-0347bbd7424c" containerID="686410ee31b263b37b7fc40cca24520568b052f0e43be96e0db9ebc4affbdcf8" exitCode=0 Nov 24 11:11:39 crc kubenswrapper[5072]: I1124 11:11:39.075504 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"525fe918-d559-44d0-b583-0347bbd7424c","Type":"ContainerDied","Data":"686410ee31b263b37b7fc40cca24520568b052f0e43be96e0db9ebc4affbdcf8"} Nov 24 11:11:40 crc kubenswrapper[5072]: I1124 11:11:40.690947 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-dxmxv" Nov 24 11:11:41 crc kubenswrapper[5072]: I1124 11:11:41.799345 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 11:11:41 crc kubenswrapper[5072]: I1124 11:11:41.905159 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/525fe918-d559-44d0-b583-0347bbd7424c-kube-api-access\") pod \"525fe918-d559-44d0-b583-0347bbd7424c\" (UID: \"525fe918-d559-44d0-b583-0347bbd7424c\") " Nov 24 11:11:41 crc kubenswrapper[5072]: I1124 11:11:41.905220 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/525fe918-d559-44d0-b583-0347bbd7424c-kubelet-dir\") pod \"525fe918-d559-44d0-b583-0347bbd7424c\" (UID: \"525fe918-d559-44d0-b583-0347bbd7424c\") " Nov 24 11:11:41 crc kubenswrapper[5072]: I1124 11:11:41.905592 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/525fe918-d559-44d0-b583-0347bbd7424c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "525fe918-d559-44d0-b583-0347bbd7424c" (UID: "525fe918-d559-44d0-b583-0347bbd7424c"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:11:41 crc kubenswrapper[5072]: I1124 11:11:41.911740 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/525fe918-d559-44d0-b583-0347bbd7424c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "525fe918-d559-44d0-b583-0347bbd7424c" (UID: "525fe918-d559-44d0-b583-0347bbd7424c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:11:42 crc kubenswrapper[5072]: I1124 11:11:42.007596 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/525fe918-d559-44d0-b583-0347bbd7424c-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 11:11:42 crc kubenswrapper[5072]: I1124 11:11:42.007650 5072 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/525fe918-d559-44d0-b583-0347bbd7424c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 24 11:11:42 crc kubenswrapper[5072]: I1124 11:11:42.095060 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"525fe918-d559-44d0-b583-0347bbd7424c","Type":"ContainerDied","Data":"ae5ccbd71237ad3f56f0fe2e721097fa712b5770f3ab3fb8d1dbd8166f95582f"} Nov 24 11:11:42 crc kubenswrapper[5072]: I1124 11:11:42.095098 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae5ccbd71237ad3f56f0fe2e721097fa712b5770f3ab3fb8d1dbd8166f95582f" Nov 24 11:11:42 crc kubenswrapper[5072]: I1124 11:11:42.095209 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 11:11:43 crc kubenswrapper[5072]: I1124 11:11:43.645033 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:11:43 crc kubenswrapper[5072]: I1124 11:11:43.645147 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:11:44 crc kubenswrapper[5072]: I1124 11:11:44.470030 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-fpxll" Nov 24 11:11:44 crc kubenswrapper[5072]: I1124 11:11:44.504633 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-798pd" Nov 24 11:11:44 crc kubenswrapper[5072]: I1124 11:11:44.511935 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-798pd" Nov 24 11:11:52 crc kubenswrapper[5072]: I1124 11:11:52.683523 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:11:53 crc kubenswrapper[5072]: I1124 11:11:53.679949 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60100e7d-c8b1-4b18-8567-46e21096fa0f-metrics-certs\") pod 
\"network-metrics-daemon-nnrv7\" (UID: \"60100e7d-c8b1-4b18-8567-46e21096fa0f\") " pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:11:53 crc kubenswrapper[5072]: I1124 11:11:53.689281 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60100e7d-c8b1-4b18-8567-46e21096fa0f-metrics-certs\") pod \"network-metrics-daemon-nnrv7\" (UID: \"60100e7d-c8b1-4b18-8567-46e21096fa0f\") " pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:11:53 crc kubenswrapper[5072]: I1124 11:11:53.965631 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nnrv7" Nov 24 11:11:57 crc kubenswrapper[5072]: E1124 11:11:57.226113 5072 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 24 11:11:57 crc kubenswrapper[5072]: E1124 11:11:57.226462 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gnvd5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-cvm5b_openshift-marketplace(f9b1a9a7-8932-4045-bd63-bbc4d796d018): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 11:11:57 crc kubenswrapper[5072]: E1124 11:11:57.236178 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-cvm5b" podUID="f9b1a9a7-8932-4045-bd63-bbc4d796d018" Nov 24 11:11:58 crc kubenswrapper[5072]: E1124 11:11:58.870283 5072 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 24 11:11:58 crc kubenswrapper[5072]: E1124 11:11:58.870957 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9xrfd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-s9t8g_openshift-marketplace(2f53d96c-25ab-4cc4-ac1a-84ae05681d4b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 11:11:58 crc kubenswrapper[5072]: E1124 11:11:58.872126 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-s9t8g" podUID="2f53d96c-25ab-4cc4-ac1a-84ae05681d4b" Nov 24 11:11:58 crc kubenswrapper[5072]: E1124 11:11:58.903212 5072 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 24 11:11:58 crc kubenswrapper[5072]: E1124 11:11:58.905223 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vmjlc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-lsrl7_openshift-marketplace(7b22a28d-845b-4cc5-a4d6-bd747cf5c958): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 11:11:58 crc kubenswrapper[5072]: E1124 11:11:58.907083 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-lsrl7" podUID="7b22a28d-845b-4cc5-a4d6-bd747cf5c958" Nov 24 11:11:58 crc kubenswrapper[5072]: E1124 11:11:58.918686 5072 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 24 11:11:58 crc kubenswrapper[5072]: E1124 11:11:58.918828 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqcm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-slkhf_openshift-marketplace(cbeb508a-245e-4c6c-9d4f-6f6f330cea5d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 11:11:58 crc kubenswrapper[5072]: E1124 11:11:58.920310 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-slkhf" podUID="cbeb508a-245e-4c6c-9d4f-6f6f330cea5d" Nov 24 11:12:01 crc kubenswrapper[5072]: E1124 11:12:01.720683 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s9t8g" podUID="2f53d96c-25ab-4cc4-ac1a-84ae05681d4b" Nov 24 11:12:01 crc kubenswrapper[5072]: E1124 11:12:01.720742 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-lsrl7" podUID="7b22a28d-845b-4cc5-a4d6-bd747cf5c958" Nov 24 11:12:01 crc kubenswrapper[5072]: E1124 11:12:01.720797 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-slkhf" podUID="cbeb508a-245e-4c6c-9d4f-6f6f330cea5d" Nov 24 11:12:01 crc kubenswrapper[5072]: E1124 11:12:01.720858 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-cvm5b" podUID="f9b1a9a7-8932-4045-bd63-bbc4d796d018" Nov 24 
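Note the progression across these entries: each pull first fails with ErrImagePull ("context canceled" while copying from registry.redhat.io), and on the next sync the pod workers report ImagePullBackOff, meaning the kubelet is deliberately waiting before retrying rather than hammering the registry. The retry delay follows an exponential backoff; the constants below (10s base, doubling, 5m cap) are the kubelet defaults, stated here as assumptions. A sketch using the apimachinery wait helpers:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Mirrors kubelet's image-pull backoff policy: 10s base, doubling, capped at 5m.
	b := wait.Backoff{Duration: 10 * time.Second, Factor: 2.0, Steps: 6, Cap: 5 * time.Minute}
	for i := 0; i < 6; i++ {
		fmt.Println("retry in", b.Step()) // 10s, 20s, 40s, 1m20s, 2m40s, 5m0s
	}
}

This is why the same "Back-off pulling image" line recurs with growing gaps until a pull finally succeeds, as it does for these catalog pods a few seconds later.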
11:12:01 crc kubenswrapper[5072]: E1124 11:12:01.747939 5072 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 24 11:12:01 crc kubenswrapper[5072]: E1124 11:12:01.748061 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2cfrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-cngqk_openshift-marketplace(2b89b78a-9da6-40b4-8285-4311083ba178): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 11:12:01 crc kubenswrapper[5072]: E1124 11:12:01.749177 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-cngqk" podUID="2b89b78a-9da6-40b4-8285-4311083ba178" Nov 24 11:12:01 crc kubenswrapper[5072]: E1124 11:12:01.754080 5072 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 24 11:12:01 crc kubenswrapper[5072]: E1124 11:12:01.754263 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pssm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-82xhn_openshift-marketplace(0e24c213-2ec7-48d9-a18c-bc0457d2a8a3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 11:12:01 crc kubenswrapper[5072]: E1124 11:12:01.755820 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-82xhn" podUID="0e24c213-2ec7-48d9-a18c-bc0457d2a8a3" Nov 24 11:12:01 crc kubenswrapper[5072]: W1124 11:12:01.841617 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-e58c48ba1adb4f0d4816196cf821d7fd8743a15316045ca6daceea26b34ef0ba WatchSource:0}: Error finding container e58c48ba1adb4f0d4816196cf821d7fd8743a15316045ca6daceea26b34ef0ba: Status 404 returned error can't find the container with id e58c48ba1adb4f0d4816196cf821d7fd8743a15316045ca6daceea26b34ef0ba Nov 24 11:12:02 crc kubenswrapper[5072]: I1124 11:12:02.204552 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pvs9g" event={"ID":"2f57ff17-1692-4fef-ba23-2b510f5a748b","Type":"ContainerStarted","Data":"354161dd7b5489d7b2051618e9e789bb0fd65b0a4002cd0ed1de42a154b8cf81"} Nov 24 11:12:02 crc kubenswrapper[5072]: I1124 11:12:02.206552 5072 generic.go:334] "Generic (PLEG): container finished" podID="f157ffe3-63a8-4ad9-a432-d65de31b5e8f" containerID="926781167cbf3f5ff2b0d72ba501d8c872d04267256370a6583cc2754f794f36" exitCode=0 Nov 24 11:12:02 crc kubenswrapper[5072]: I1124 11:12:02.206641 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjbg7" event={"ID":"f157ffe3-63a8-4ad9-a432-d65de31b5e8f","Type":"ContainerDied","Data":"926781167cbf3f5ff2b0d72ba501d8c872d04267256370a6583cc2754f794f36"} Nov 24 11:12:02 crc kubenswrapper[5072]: I1124 11:12:02.208598 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8dbcae64ad8ccfccbf69e741110cbb1baa8a3a94596ac0ae9d3eae1c51493629"} Nov 24 11:12:02 crc kubenswrapper[5072]: I1124 11:12:02.208660 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e58c48ba1adb4f0d4816196cf821d7fd8743a15316045ca6daceea26b34ef0ba"} Nov 24 11:12:02 crc kubenswrapper[5072]: I1124 11:12:02.209991 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7f68888e5fdc8095a9144e4708fe259062391c0951faf81b30fe943034e72ca6"} Nov 24 11:12:02 crc kubenswrapper[5072]: I1124 11:12:02.210062 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7f4a64b305829e2bc73b2f150403b20243bff4737a4666829547b3ee7f221984"} Nov 24 11:12:02 crc kubenswrapper[5072]: E1124 11:12:02.215065 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-cngqk" podUID="2b89b78a-9da6-40b4-8285-4311083ba178" Nov 24 11:12:02 crc kubenswrapper[5072]: E1124 11:12:02.215609 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-82xhn" podUID="0e24c213-2ec7-48d9-a18c-bc0457d2a8a3" Nov 24 11:12:02 crc kubenswrapper[5072]: I1124 11:12:02.276935 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-nnrv7"] Nov 24 11:12:02 crc kubenswrapper[5072]: W1124 11:12:02.343651 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60100e7d_c8b1_4b18_8567_46e21096fa0f.slice/crio-201e7f860cd00b24c46ffb88b3505ff0cc11975ba9dc4026a32f00a801a9e332 WatchSource:0}: Error finding container 201e7f860cd00b24c46ffb88b3505ff0cc11975ba9dc4026a32f00a801a9e332: Status 404 returned error can't find the container with id 201e7f860cd00b24c46ffb88b3505ff0cc11975ba9dc4026a32f00a801a9e332 Nov 24 11:12:03 crc kubenswrapper[5072]: I1124 11:12:03.221544 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-nnrv7" event={"ID":"60100e7d-c8b1-4b18-8567-46e21096fa0f","Type":"ContainerStarted","Data":"f1bce583a13ee8537c542bb01ab69534dd9ae68d6a8dce96906d21b49e835409"} Nov 24 11:12:03 crc kubenswrapper[5072]: I1124 11:12:03.221845 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-nnrv7" event={"ID":"60100e7d-c8b1-4b18-8567-46e21096fa0f","Type":"ContainerStarted","Data":"90298a0d8fb9fd0f0593ecb3c04b17603e1c921f4fa1a58e0e89966c544e171f"} Nov 24 11:12:03 crc kubenswrapper[5072]: I1124 11:12:03.221859 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-nnrv7" 
event={"ID":"60100e7d-c8b1-4b18-8567-46e21096fa0f","Type":"ContainerStarted","Data":"201e7f860cd00b24c46ffb88b3505ff0cc11975ba9dc4026a32f00a801a9e332"} Nov 24 11:12:03 crc kubenswrapper[5072]: I1124 11:12:03.225883 5072 generic.go:334] "Generic (PLEG): container finished" podID="2f57ff17-1692-4fef-ba23-2b510f5a748b" containerID="354161dd7b5489d7b2051618e9e789bb0fd65b0a4002cd0ed1de42a154b8cf81" exitCode=0 Nov 24 11:12:03 crc kubenswrapper[5072]: I1124 11:12:03.225992 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pvs9g" event={"ID":"2f57ff17-1692-4fef-ba23-2b510f5a748b","Type":"ContainerDied","Data":"354161dd7b5489d7b2051618e9e789bb0fd65b0a4002cd0ed1de42a154b8cf81"} Nov 24 11:12:03 crc kubenswrapper[5072]: I1124 11:12:03.229290 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a0d45b259dcb2cb341274cb88c80769defb7d4bf03c7e8a0fcd473cb9d674d8c"} Nov 24 11:12:03 crc kubenswrapper[5072]: I1124 11:12:03.229327 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"432d499cdb794fa5968399c0a3aa73ae76aa4f4abe32919ebc03d4e37a036f33"} Nov 24 11:12:03 crc kubenswrapper[5072]: I1124 11:12:03.233184 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjbg7" event={"ID":"f157ffe3-63a8-4ad9-a432-d65de31b5e8f","Type":"ContainerStarted","Data":"f33cb13a848f6a4b137000f5be855f9b0234677e8ce807ddb685487d4920076a"} Nov 24 11:12:03 crc kubenswrapper[5072]: I1124 11:12:03.249826 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-nnrv7" podStartSLOduration=152.249793584 podStartE2EDuration="2m32.249793584s" podCreationTimestamp="2025-11-24 11:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:12:03.240843388 +0000 UTC m=+174.952367904" watchObservedRunningTime="2025-11-24 11:12:03.249793584 +0000 UTC m=+174.961318100" Nov 24 11:12:03 crc kubenswrapper[5072]: I1124 11:12:03.264929 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hjbg7" podStartSLOduration=3.592358881 podStartE2EDuration="30.264913267s" podCreationTimestamp="2025-11-24 11:11:33 +0000 UTC" firstStartedPulling="2025-11-24 11:11:36.033419041 +0000 UTC m=+147.744943517" lastFinishedPulling="2025-11-24 11:12:02.705973387 +0000 UTC m=+174.417497903" observedRunningTime="2025-11-24 11:12:03.264627889 +0000 UTC m=+174.976152365" watchObservedRunningTime="2025-11-24 11:12:03.264913267 +0000 UTC m=+174.976437743" Nov 24 11:12:04 crc kubenswrapper[5072]: I1124 11:12:04.384015 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hjbg7" Nov 24 11:12:04 crc kubenswrapper[5072]: I1124 11:12:04.384525 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hjbg7" Nov 24 11:12:05 crc kubenswrapper[5072]: I1124 11:12:05.249946 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pvs9g" 
event={"ID":"2f57ff17-1692-4fef-ba23-2b510f5a748b","Type":"ContainerStarted","Data":"b0c70158acfffa159f35a11f033f93bcec4e3685da783bab60c14a51202ff508"} Nov 24 11:12:05 crc kubenswrapper[5072]: I1124 11:12:05.275266 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pvs9g" podStartSLOduration=3.117031902 podStartE2EDuration="34.275229689s" podCreationTimestamp="2025-11-24 11:11:31 +0000 UTC" firstStartedPulling="2025-11-24 11:11:32.918262221 +0000 UTC m=+144.629786737" lastFinishedPulling="2025-11-24 11:12:04.076460038 +0000 UTC m=+175.787984524" observedRunningTime="2025-11-24 11:12:05.272759048 +0000 UTC m=+176.984283524" watchObservedRunningTime="2025-11-24 11:12:05.275229689 +0000 UTC m=+176.986754175" Nov 24 11:12:05 crc kubenswrapper[5072]: I1124 11:12:05.567920 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-hjbg7" podUID="f157ffe3-63a8-4ad9-a432-d65de31b5e8f" containerName="registry-server" probeResult="failure" output=< Nov 24 11:12:05 crc kubenswrapper[5072]: timeout: failed to connect service ":50051" within 1s Nov 24 11:12:05 crc kubenswrapper[5072]: > Nov 24 11:12:05 crc kubenswrapper[5072]: I1124 11:12:05.627134 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x4dl" Nov 24 11:12:08 crc kubenswrapper[5072]: I1124 11:12:08.091331 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:12:11 crc kubenswrapper[5072]: I1124 11:12:11.916155 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pvs9g" Nov 24 11:12:11 crc kubenswrapper[5072]: I1124 11:12:11.918057 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pvs9g" Nov 24 11:12:11 crc kubenswrapper[5072]: I1124 11:12:11.962365 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pvs9g" Nov 24 11:12:12 crc kubenswrapper[5072]: I1124 11:12:12.330330 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pvs9g" Nov 24 11:12:13 crc kubenswrapper[5072]: I1124 11:12:13.295189 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9t8g" event={"ID":"2f53d96c-25ab-4cc4-ac1a-84ae05681d4b","Type":"ContainerStarted","Data":"701bd6988ffe295e89b6c4bef96834bac5af6f5d6f9eb6e5c97f232f2c9949db"} Nov 24 11:12:13 crc kubenswrapper[5072]: I1124 11:12:13.645167 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:12:13 crc kubenswrapper[5072]: I1124 11:12:13.645246 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:12:14 crc kubenswrapper[5072]: I1124 11:12:14.302878 5072 generic.go:334] "Generic (PLEG): 
container finished" podID="2f53d96c-25ab-4cc4-ac1a-84ae05681d4b" containerID="701bd6988ffe295e89b6c4bef96834bac5af6f5d6f9eb6e5c97f232f2c9949db" exitCode=0 Nov 24 11:12:14 crc kubenswrapper[5072]: I1124 11:12:14.303002 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9t8g" event={"ID":"2f53d96c-25ab-4cc4-ac1a-84ae05681d4b","Type":"ContainerDied","Data":"701bd6988ffe295e89b6c4bef96834bac5af6f5d6f9eb6e5c97f232f2c9949db"} Nov 24 11:12:14 crc kubenswrapper[5072]: I1124 11:12:14.428019 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hjbg7" Nov 24 11:12:14 crc kubenswrapper[5072]: I1124 11:12:14.469589 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hjbg7" Nov 24 11:12:15 crc kubenswrapper[5072]: I1124 11:12:15.317778 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9t8g" event={"ID":"2f53d96c-25ab-4cc4-ac1a-84ae05681d4b","Type":"ContainerStarted","Data":"ed2e4d83f4fa80775433491d51c7a567d7e92bcf8a05603cb3072f98e7abe540"} Nov 24 11:12:15 crc kubenswrapper[5072]: I1124 11:12:15.320453 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cngqk" event={"ID":"2b89b78a-9da6-40b4-8285-4311083ba178","Type":"ContainerStarted","Data":"8e1303b29ce7f1b5915d240567474fc8af18ff77b9f8c7a0d27f35cee2ddb9a7"} Nov 24 11:12:15 crc kubenswrapper[5072]: I1124 11:12:15.322397 5072 generic.go:334] "Generic (PLEG): container finished" podID="7b22a28d-845b-4cc5-a4d6-bd747cf5c958" containerID="6af5f65c6ce8cec9254a56e74b780f8e38f884ec8b232b1ea427824f8af2ae83" exitCode=0 Nov 24 11:12:15 crc kubenswrapper[5072]: I1124 11:12:15.322777 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsrl7" event={"ID":"7b22a28d-845b-4cc5-a4d6-bd747cf5c958","Type":"ContainerDied","Data":"6af5f65c6ce8cec9254a56e74b780f8e38f884ec8b232b1ea427824f8af2ae83"} Nov 24 11:12:15 crc kubenswrapper[5072]: I1124 11:12:15.338919 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s9t8g" podStartSLOduration=2.53226496 podStartE2EDuration="44.338901284s" podCreationTimestamp="2025-11-24 11:11:31 +0000 UTC" firstStartedPulling="2025-11-24 11:11:32.904798085 +0000 UTC m=+144.616322571" lastFinishedPulling="2025-11-24 11:12:14.711434409 +0000 UTC m=+186.422958895" observedRunningTime="2025-11-24 11:12:15.337329189 +0000 UTC m=+187.048853665" watchObservedRunningTime="2025-11-24 11:12:15.338901284 +0000 UTC m=+187.050425760" Nov 24 11:12:16 crc kubenswrapper[5072]: I1124 11:12:16.331304 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsrl7" event={"ID":"7b22a28d-845b-4cc5-a4d6-bd747cf5c958","Type":"ContainerStarted","Data":"21b764d02d436eae6d6cca4920fc2c2f6f69e24249fe40ad370f70e08df49260"} Nov 24 11:12:16 crc kubenswrapper[5072]: I1124 11:12:16.334036 5072 generic.go:334] "Generic (PLEG): container finished" podID="2b89b78a-9da6-40b4-8285-4311083ba178" containerID="8e1303b29ce7f1b5915d240567474fc8af18ff77b9f8c7a0d27f35cee2ddb9a7" exitCode=0 Nov 24 11:12:16 crc kubenswrapper[5072]: I1124 11:12:16.334068 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cngqk" 
event={"ID":"2b89b78a-9da6-40b4-8285-4311083ba178","Type":"ContainerDied","Data":"8e1303b29ce7f1b5915d240567474fc8af18ff77b9f8c7a0d27f35cee2ddb9a7"} Nov 24 11:12:16 crc kubenswrapper[5072]: I1124 11:12:16.368133 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lsrl7" podStartSLOduration=2.401620396 podStartE2EDuration="45.368115595s" podCreationTimestamp="2025-11-24 11:11:31 +0000 UTC" firstStartedPulling="2025-11-24 11:11:32.879735707 +0000 UTC m=+144.591260183" lastFinishedPulling="2025-11-24 11:12:15.846230906 +0000 UTC m=+187.557755382" observedRunningTime="2025-11-24 11:12:16.367063914 +0000 UTC m=+188.078588390" watchObservedRunningTime="2025-11-24 11:12:16.368115595 +0000 UTC m=+188.079640061" Nov 24 11:12:17 crc kubenswrapper[5072]: I1124 11:12:17.343678 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cngqk" event={"ID":"2b89b78a-9da6-40b4-8285-4311083ba178","Type":"ContainerStarted","Data":"8f89d74e598ced8e066ace7c2cf527cfcb24ff775d2a3f4c544b4faa5280cb00"} Nov 24 11:12:17 crc kubenswrapper[5072]: I1124 11:12:17.345678 5072 generic.go:334] "Generic (PLEG): container finished" podID="f9b1a9a7-8932-4045-bd63-bbc4d796d018" containerID="b801c1017f6294f4297f2b42cec67a18b4deaf2e731e8ff53b3741d589d06f0f" exitCode=0 Nov 24 11:12:17 crc kubenswrapper[5072]: I1124 11:12:17.345711 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cvm5b" event={"ID":"f9b1a9a7-8932-4045-bd63-bbc4d796d018","Type":"ContainerDied","Data":"b801c1017f6294f4297f2b42cec67a18b4deaf2e731e8ff53b3741d589d06f0f"} Nov 24 11:12:17 crc kubenswrapper[5072]: I1124 11:12:17.379128 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cngqk" podStartSLOduration=2.658128658 podStartE2EDuration="43.379100323s" podCreationTimestamp="2025-11-24 11:11:34 +0000 UTC" firstStartedPulling="2025-11-24 11:11:36.00372806 +0000 UTC m=+147.715252536" lastFinishedPulling="2025-11-24 11:12:16.724699725 +0000 UTC m=+188.436224201" observedRunningTime="2025-11-24 11:12:17.375593572 +0000 UTC m=+189.087118048" watchObservedRunningTime="2025-11-24 11:12:17.379100323 +0000 UTC m=+189.090624819" Nov 24 11:12:18 crc kubenswrapper[5072]: I1124 11:12:18.359769 5072 generic.go:334] "Generic (PLEG): container finished" podID="0e24c213-2ec7-48d9-a18c-bc0457d2a8a3" containerID="2cc2b3e1d86a70c2cb2cf7832218ca0b55cc5923f241eb1b4b0f880994e53788" exitCode=0 Nov 24 11:12:18 crc kubenswrapper[5072]: I1124 11:12:18.359851 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82xhn" event={"ID":"0e24c213-2ec7-48d9-a18c-bc0457d2a8a3","Type":"ContainerDied","Data":"2cc2b3e1d86a70c2cb2cf7832218ca0b55cc5923f241eb1b4b0f880994e53788"} Nov 24 11:12:18 crc kubenswrapper[5072]: I1124 11:12:18.365174 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cvm5b" event={"ID":"f9b1a9a7-8932-4045-bd63-bbc4d796d018","Type":"ContainerStarted","Data":"49bd04fc0e832d07318d2e881f19773a33521b47733fd5c3f1a726310283faed"} Nov 24 11:12:18 crc kubenswrapper[5072]: I1124 11:12:18.367681 5072 generic.go:334] "Generic (PLEG): container finished" podID="cbeb508a-245e-4c6c-9d4f-6f6f330cea5d" containerID="cf85a81334a29ad002bd0dff52348cc2d895a47547614886a3821fbe67aeebce" exitCode=0 Nov 24 11:12:18 crc kubenswrapper[5072]: I1124 11:12:18.367729 5072 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-slkhf" event={"ID":"cbeb508a-245e-4c6c-9d4f-6f6f330cea5d","Type":"ContainerDied","Data":"cf85a81334a29ad002bd0dff52348cc2d895a47547614886a3821fbe67aeebce"} Nov 24 11:12:18 crc kubenswrapper[5072]: I1124 11:12:18.398692 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hjbg7"] Nov 24 11:12:18 crc kubenswrapper[5072]: I1124 11:12:18.398934 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hjbg7" podUID="f157ffe3-63a8-4ad9-a432-d65de31b5e8f" containerName="registry-server" containerID="cri-o://f33cb13a848f6a4b137000f5be855f9b0234677e8ce807ddb685487d4920076a" gracePeriod=2 Nov 24 11:12:18 crc kubenswrapper[5072]: I1124 11:12:18.400826 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cvm5b" podStartSLOduration=2.678490597 podStartE2EDuration="45.400807527s" podCreationTimestamp="2025-11-24 11:11:33 +0000 UTC" firstStartedPulling="2025-11-24 11:11:35.01746575 +0000 UTC m=+146.728990226" lastFinishedPulling="2025-11-24 11:12:17.73978268 +0000 UTC m=+189.451307156" observedRunningTime="2025-11-24 11:12:18.394569518 +0000 UTC m=+190.106093994" watchObservedRunningTime="2025-11-24 11:12:18.400807527 +0000 UTC m=+190.112332013" Nov 24 11:12:18 crc kubenswrapper[5072]: I1124 11:12:18.770317 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hjbg7" Nov 24 11:12:18 crc kubenswrapper[5072]: I1124 11:12:18.907810 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f157ffe3-63a8-4ad9-a432-d65de31b5e8f-catalog-content\") pod \"f157ffe3-63a8-4ad9-a432-d65de31b5e8f\" (UID: \"f157ffe3-63a8-4ad9-a432-d65de31b5e8f\") " Nov 24 11:12:18 crc kubenswrapper[5072]: I1124 11:12:18.907846 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f157ffe3-63a8-4ad9-a432-d65de31b5e8f-utilities\") pod \"f157ffe3-63a8-4ad9-a432-d65de31b5e8f\" (UID: \"f157ffe3-63a8-4ad9-a432-d65de31b5e8f\") " Nov 24 11:12:18 crc kubenswrapper[5072]: I1124 11:12:18.907961 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcmbx\" (UniqueName: \"kubernetes.io/projected/f157ffe3-63a8-4ad9-a432-d65de31b5e8f-kube-api-access-hcmbx\") pod \"f157ffe3-63a8-4ad9-a432-d65de31b5e8f\" (UID: \"f157ffe3-63a8-4ad9-a432-d65de31b5e8f\") " Nov 24 11:12:18 crc kubenswrapper[5072]: I1124 11:12:18.908818 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f157ffe3-63a8-4ad9-a432-d65de31b5e8f-utilities" (OuterVolumeSpecName: "utilities") pod "f157ffe3-63a8-4ad9-a432-d65de31b5e8f" (UID: "f157ffe3-63a8-4ad9-a432-d65de31b5e8f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:12:18 crc kubenswrapper[5072]: I1124 11:12:18.915491 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f157ffe3-63a8-4ad9-a432-d65de31b5e8f-kube-api-access-hcmbx" (OuterVolumeSpecName: "kube-api-access-hcmbx") pod "f157ffe3-63a8-4ad9-a432-d65de31b5e8f" (UID: "f157ffe3-63a8-4ad9-a432-d65de31b5e8f"). InnerVolumeSpecName "kube-api-access-hcmbx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:12:18 crc kubenswrapper[5072]: I1124 11:12:18.935921 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f157ffe3-63a8-4ad9-a432-d65de31b5e8f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f157ffe3-63a8-4ad9-a432-d65de31b5e8f" (UID: "f157ffe3-63a8-4ad9-a432-d65de31b5e8f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:12:19 crc kubenswrapper[5072]: I1124 11:12:19.009002 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f157ffe3-63a8-4ad9-a432-d65de31b5e8f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:12:19 crc kubenswrapper[5072]: I1124 11:12:19.009035 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f157ffe3-63a8-4ad9-a432-d65de31b5e8f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:12:19 crc kubenswrapper[5072]: I1124 11:12:19.009044 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcmbx\" (UniqueName: \"kubernetes.io/projected/f157ffe3-63a8-4ad9-a432-d65de31b5e8f-kube-api-access-hcmbx\") on node \"crc\" DevicePath \"\"" Nov 24 11:12:19 crc kubenswrapper[5072]: I1124 11:12:19.375220 5072 generic.go:334] "Generic (PLEG): container finished" podID="f157ffe3-63a8-4ad9-a432-d65de31b5e8f" containerID="f33cb13a848f6a4b137000f5be855f9b0234677e8ce807ddb685487d4920076a" exitCode=0 Nov 24 11:12:19 crc kubenswrapper[5072]: I1124 11:12:19.375291 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjbg7" event={"ID":"f157ffe3-63a8-4ad9-a432-d65de31b5e8f","Type":"ContainerDied","Data":"f33cb13a848f6a4b137000f5be855f9b0234677e8ce807ddb685487d4920076a"} Nov 24 11:12:19 crc kubenswrapper[5072]: I1124 11:12:19.375320 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjbg7" event={"ID":"f157ffe3-63a8-4ad9-a432-d65de31b5e8f","Type":"ContainerDied","Data":"12c57c989f84b8e30e961b8761672e70985fa2fff5a3de6ecb39e30c6a9f261f"} Nov 24 11:12:19 crc kubenswrapper[5072]: I1124 11:12:19.375337 5072 scope.go:117] "RemoveContainer" containerID="f33cb13a848f6a4b137000f5be855f9b0234677e8ce807ddb685487d4920076a" Nov 24 11:12:19 crc kubenswrapper[5072]: I1124 11:12:19.375883 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hjbg7" Nov 24 11:12:19 crc kubenswrapper[5072]: I1124 11:12:19.378675 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-slkhf" event={"ID":"cbeb508a-245e-4c6c-9d4f-6f6f330cea5d","Type":"ContainerStarted","Data":"7df137ea95a12b501b439a5b62bf06a9d5c8c8b3977525854a05582d5d5ed4e2"} Nov 24 11:12:19 crc kubenswrapper[5072]: I1124 11:12:19.381899 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82xhn" event={"ID":"0e24c213-2ec7-48d9-a18c-bc0457d2a8a3","Type":"ContainerStarted","Data":"f0871c23ea8d3840ad5cd29b621e88438f177d73c4780b447c4e1ddf323b728d"} Nov 24 11:12:19 crc kubenswrapper[5072]: I1124 11:12:19.391302 5072 scope.go:117] "RemoveContainer" containerID="926781167cbf3f5ff2b0d72ba501d8c872d04267256370a6583cc2754f794f36" Nov 24 11:12:19 crc kubenswrapper[5072]: I1124 11:12:19.391838 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hjbg7"] Nov 24 11:12:19 crc kubenswrapper[5072]: I1124 11:12:19.397470 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hjbg7"] Nov 24 11:12:19 crc kubenswrapper[5072]: I1124 11:12:19.403163 5072 scope.go:117] "RemoveContainer" containerID="26e221e74930b55ce43c59b0808d81eacbf39c5454f7eef8fcc15701002913b1" Nov 24 11:12:19 crc kubenswrapper[5072]: I1124 11:12:19.404992 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-slkhf" podStartSLOduration=2.535898164 podStartE2EDuration="48.40497463s" podCreationTimestamp="2025-11-24 11:11:31 +0000 UTC" firstStartedPulling="2025-11-24 11:11:32.91857809 +0000 UTC m=+144.630102556" lastFinishedPulling="2025-11-24 11:12:18.787654516 +0000 UTC m=+190.499179022" observedRunningTime="2025-11-24 11:12:19.404194278 +0000 UTC m=+191.115718754" watchObservedRunningTime="2025-11-24 11:12:19.40497463 +0000 UTC m=+191.116499106" Nov 24 11:12:19 crc kubenswrapper[5072]: I1124 11:12:19.419595 5072 scope.go:117] "RemoveContainer" containerID="f33cb13a848f6a4b137000f5be855f9b0234677e8ce807ddb685487d4920076a" Nov 24 11:12:19 crc kubenswrapper[5072]: E1124 11:12:19.420180 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f33cb13a848f6a4b137000f5be855f9b0234677e8ce807ddb685487d4920076a\": container with ID starting with f33cb13a848f6a4b137000f5be855f9b0234677e8ce807ddb685487d4920076a not found: ID does not exist" containerID="f33cb13a848f6a4b137000f5be855f9b0234677e8ce807ddb685487d4920076a" Nov 24 11:12:19 crc kubenswrapper[5072]: I1124 11:12:19.420219 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f33cb13a848f6a4b137000f5be855f9b0234677e8ce807ddb685487d4920076a"} err="failed to get container status \"f33cb13a848f6a4b137000f5be855f9b0234677e8ce807ddb685487d4920076a\": rpc error: code = NotFound desc = could not find container \"f33cb13a848f6a4b137000f5be855f9b0234677e8ce807ddb685487d4920076a\": container with ID starting with f33cb13a848f6a4b137000f5be855f9b0234677e8ce807ddb685487d4920076a not found: ID does not exist" Nov 24 11:12:19 crc kubenswrapper[5072]: I1124 11:12:19.420293 5072 scope.go:117] "RemoveContainer" containerID="926781167cbf3f5ff2b0d72ba501d8c872d04267256370a6583cc2754f794f36" Nov 24 11:12:19 crc kubenswrapper[5072]: E1124 11:12:19.420648 5072 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"926781167cbf3f5ff2b0d72ba501d8c872d04267256370a6583cc2754f794f36\": container with ID starting with 926781167cbf3f5ff2b0d72ba501d8c872d04267256370a6583cc2754f794f36 not found: ID does not exist" containerID="926781167cbf3f5ff2b0d72ba501d8c872d04267256370a6583cc2754f794f36" Nov 24 11:12:19 crc kubenswrapper[5072]: I1124 11:12:19.420764 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"926781167cbf3f5ff2b0d72ba501d8c872d04267256370a6583cc2754f794f36"} err="failed to get container status \"926781167cbf3f5ff2b0d72ba501d8c872d04267256370a6583cc2754f794f36\": rpc error: code = NotFound desc = could not find container \"926781167cbf3f5ff2b0d72ba501d8c872d04267256370a6583cc2754f794f36\": container with ID starting with 926781167cbf3f5ff2b0d72ba501d8c872d04267256370a6583cc2754f794f36 not found: ID does not exist" Nov 24 11:12:19 crc kubenswrapper[5072]: I1124 11:12:19.420858 5072 scope.go:117] "RemoveContainer" containerID="26e221e74930b55ce43c59b0808d81eacbf39c5454f7eef8fcc15701002913b1" Nov 24 11:12:19 crc kubenswrapper[5072]: E1124 11:12:19.421311 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26e221e74930b55ce43c59b0808d81eacbf39c5454f7eef8fcc15701002913b1\": container with ID starting with 26e221e74930b55ce43c59b0808d81eacbf39c5454f7eef8fcc15701002913b1 not found: ID does not exist" containerID="26e221e74930b55ce43c59b0808d81eacbf39c5454f7eef8fcc15701002913b1" Nov 24 11:12:19 crc kubenswrapper[5072]: I1124 11:12:19.421343 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26e221e74930b55ce43c59b0808d81eacbf39c5454f7eef8fcc15701002913b1"} err="failed to get container status \"26e221e74930b55ce43c59b0808d81eacbf39c5454f7eef8fcc15701002913b1\": rpc error: code = NotFound desc = could not find container \"26e221e74930b55ce43c59b0808d81eacbf39c5454f7eef8fcc15701002913b1\": container with ID starting with 26e221e74930b55ce43c59b0808d81eacbf39c5454f7eef8fcc15701002913b1 not found: ID does not exist" Nov 24 11:12:19 crc kubenswrapper[5072]: I1124 11:12:19.427439 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-82xhn" podStartSLOduration=3.632578528 podStartE2EDuration="45.427428874s" podCreationTimestamp="2025-11-24 11:11:34 +0000 UTC" firstStartedPulling="2025-11-24 11:11:37.038263823 +0000 UTC m=+148.749788289" lastFinishedPulling="2025-11-24 11:12:18.833114159 +0000 UTC m=+190.544638635" observedRunningTime="2025-11-24 11:12:19.424904751 +0000 UTC m=+191.136429227" watchObservedRunningTime="2025-11-24 11:12:19.427428874 +0000 UTC m=+191.138953350" Nov 24 11:12:21 crc kubenswrapper[5072]: I1124 11:12:21.028294 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f157ffe3-63a8-4ad9-a432-d65de31b5e8f" path="/var/lib/kubelet/pods/f157ffe3-63a8-4ad9-a432-d65de31b5e8f/volumes" Nov 24 11:12:21 crc kubenswrapper[5072]: I1124 11:12:21.750549 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-slkhf" Nov 24 11:12:21 crc kubenswrapper[5072]: I1124 11:12:21.750649 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-slkhf" Nov 24 11:12:21 crc kubenswrapper[5072]: I1124 11:12:21.807492 5072 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-slkhf" Nov 24 11:12:22 crc kubenswrapper[5072]: I1124 11:12:22.122686 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lsrl7" Nov 24 11:12:22 crc kubenswrapper[5072]: I1124 11:12:22.122856 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lsrl7" Nov 24 11:12:22 crc kubenswrapper[5072]: I1124 11:12:22.167328 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lsrl7" Nov 24 11:12:22 crc kubenswrapper[5072]: I1124 11:12:22.393062 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s9t8g" Nov 24 11:12:22 crc kubenswrapper[5072]: I1124 11:12:22.393104 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s9t8g" Nov 24 11:12:22 crc kubenswrapper[5072]: I1124 11:12:22.434788 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s9t8g" Nov 24 11:12:22 crc kubenswrapper[5072]: I1124 11:12:22.469857 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lsrl7" Nov 24 11:12:22 crc kubenswrapper[5072]: I1124 11:12:22.485889 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s9t8g" Nov 24 11:12:23 crc kubenswrapper[5072]: I1124 11:12:23.939855 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cvm5b" Nov 24 11:12:23 crc kubenswrapper[5072]: I1124 11:12:23.940306 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cvm5b" Nov 24 11:12:24 crc kubenswrapper[5072]: I1124 11:12:24.010200 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cvm5b" Nov 24 11:12:24 crc kubenswrapper[5072]: I1124 11:12:24.458560 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cvm5b" Nov 24 11:12:24 crc kubenswrapper[5072]: I1124 11:12:24.799520 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s9t8g"] Nov 24 11:12:24 crc kubenswrapper[5072]: I1124 11:12:24.799848 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s9t8g" podUID="2f53d96c-25ab-4cc4-ac1a-84ae05681d4b" containerName="registry-server" containerID="cri-o://ed2e4d83f4fa80775433491d51c7a567d7e92bcf8a05603cb3072f98e7abe540" gracePeriod=2 Nov 24 11:12:24 crc kubenswrapper[5072]: I1124 11:12:24.956730 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cngqk" Nov 24 11:12:24 crc kubenswrapper[5072]: I1124 11:12:24.956800 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cngqk" Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 11:12:25.027213 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cngqk" Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 
11:12:25.235200 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s9t8g" Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 11:12:25.392448 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f53d96c-25ab-4cc4-ac1a-84ae05681d4b-catalog-content\") pod \"2f53d96c-25ab-4cc4-ac1a-84ae05681d4b\" (UID: \"2f53d96c-25ab-4cc4-ac1a-84ae05681d4b\") " Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 11:12:25.392544 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f53d96c-25ab-4cc4-ac1a-84ae05681d4b-utilities\") pod \"2f53d96c-25ab-4cc4-ac1a-84ae05681d4b\" (UID: \"2f53d96c-25ab-4cc4-ac1a-84ae05681d4b\") " Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 11:12:25.392596 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xrfd\" (UniqueName: \"kubernetes.io/projected/2f53d96c-25ab-4cc4-ac1a-84ae05681d4b-kube-api-access-9xrfd\") pod \"2f53d96c-25ab-4cc4-ac1a-84ae05681d4b\" (UID: \"2f53d96c-25ab-4cc4-ac1a-84ae05681d4b\") " Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 11:12:25.394308 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f53d96c-25ab-4cc4-ac1a-84ae05681d4b-utilities" (OuterVolumeSpecName: "utilities") pod "2f53d96c-25ab-4cc4-ac1a-84ae05681d4b" (UID: "2f53d96c-25ab-4cc4-ac1a-84ae05681d4b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 11:12:25.397648 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f53d96c-25ab-4cc4-ac1a-84ae05681d4b-kube-api-access-9xrfd" (OuterVolumeSpecName: "kube-api-access-9xrfd") pod "2f53d96c-25ab-4cc4-ac1a-84ae05681d4b" (UID: "2f53d96c-25ab-4cc4-ac1a-84ae05681d4b"). InnerVolumeSpecName "kube-api-access-9xrfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 11:12:25.415682 5072 generic.go:334] "Generic (PLEG): container finished" podID="2f53d96c-25ab-4cc4-ac1a-84ae05681d4b" containerID="ed2e4d83f4fa80775433491d51c7a567d7e92bcf8a05603cb3072f98e7abe540" exitCode=0 Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 11:12:25.415739 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s9t8g" Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 11:12:25.415764 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9t8g" event={"ID":"2f53d96c-25ab-4cc4-ac1a-84ae05681d4b","Type":"ContainerDied","Data":"ed2e4d83f4fa80775433491d51c7a567d7e92bcf8a05603cb3072f98e7abe540"} Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 11:12:25.415809 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9t8g" event={"ID":"2f53d96c-25ab-4cc4-ac1a-84ae05681d4b","Type":"ContainerDied","Data":"d9d7b2fa5e1972d8057f9526bdd9a37c72d0aa7fe4171d65b4204568541cdcbc"} Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 11:12:25.415828 5072 scope.go:117] "RemoveContainer" containerID="ed2e4d83f4fa80775433491d51c7a567d7e92bcf8a05603cb3072f98e7abe540" Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 11:12:25.442631 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f53d96c-25ab-4cc4-ac1a-84ae05681d4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f53d96c-25ab-4cc4-ac1a-84ae05681d4b" (UID: "2f53d96c-25ab-4cc4-ac1a-84ae05681d4b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 11:12:25.448160 5072 scope.go:117] "RemoveContainer" containerID="701bd6988ffe295e89b6c4bef96834bac5af6f5d6f9eb6e5c97f232f2c9949db" Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 11:12:25.462264 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cngqk" Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 11:12:25.474012 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-82xhn" Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 11:12:25.474053 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-82xhn" Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 11:12:25.486025 5072 scope.go:117] "RemoveContainer" containerID="5469b2e215c556c9886ad852585a71558523ba4b0812c6d3c6342ca207daeace" Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 11:12:25.494113 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f53d96c-25ab-4cc4-ac1a-84ae05681d4b-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 11:12:25.494144 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xrfd\" (UniqueName: \"kubernetes.io/projected/2f53d96c-25ab-4cc4-ac1a-84ae05681d4b-kube-api-access-9xrfd\") on node \"crc\" DevicePath \"\"" Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 11:12:25.494153 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f53d96c-25ab-4cc4-ac1a-84ae05681d4b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 11:12:25.498987 5072 scope.go:117] "RemoveContainer" containerID="ed2e4d83f4fa80775433491d51c7a567d7e92bcf8a05603cb3072f98e7abe540" Nov 24 11:12:25 crc kubenswrapper[5072]: E1124 11:12:25.499273 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed2e4d83f4fa80775433491d51c7a567d7e92bcf8a05603cb3072f98e7abe540\": container with ID starting with 
ed2e4d83f4fa80775433491d51c7a567d7e92bcf8a05603cb3072f98e7abe540 not found: ID does not exist" containerID="ed2e4d83f4fa80775433491d51c7a567d7e92bcf8a05603cb3072f98e7abe540" Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 11:12:25.499403 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed2e4d83f4fa80775433491d51c7a567d7e92bcf8a05603cb3072f98e7abe540"} err="failed to get container status \"ed2e4d83f4fa80775433491d51c7a567d7e92bcf8a05603cb3072f98e7abe540\": rpc error: code = NotFound desc = could not find container \"ed2e4d83f4fa80775433491d51c7a567d7e92bcf8a05603cb3072f98e7abe540\": container with ID starting with ed2e4d83f4fa80775433491d51c7a567d7e92bcf8a05603cb3072f98e7abe540 not found: ID does not exist" Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 11:12:25.499488 5072 scope.go:117] "RemoveContainer" containerID="701bd6988ffe295e89b6c4bef96834bac5af6f5d6f9eb6e5c97f232f2c9949db" Nov 24 11:12:25 crc kubenswrapper[5072]: E1124 11:12:25.499740 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"701bd6988ffe295e89b6c4bef96834bac5af6f5d6f9eb6e5c97f232f2c9949db\": container with ID starting with 701bd6988ffe295e89b6c4bef96834bac5af6f5d6f9eb6e5c97f232f2c9949db not found: ID does not exist" containerID="701bd6988ffe295e89b6c4bef96834bac5af6f5d6f9eb6e5c97f232f2c9949db" Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 11:12:25.499835 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"701bd6988ffe295e89b6c4bef96834bac5af6f5d6f9eb6e5c97f232f2c9949db"} err="failed to get container status \"701bd6988ffe295e89b6c4bef96834bac5af6f5d6f9eb6e5c97f232f2c9949db\": rpc error: code = NotFound desc = could not find container \"701bd6988ffe295e89b6c4bef96834bac5af6f5d6f9eb6e5c97f232f2c9949db\": container with ID starting with 701bd6988ffe295e89b6c4bef96834bac5af6f5d6f9eb6e5c97f232f2c9949db not found: ID does not exist" Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 11:12:25.499912 5072 scope.go:117] "RemoveContainer" containerID="5469b2e215c556c9886ad852585a71558523ba4b0812c6d3c6342ca207daeace" Nov 24 11:12:25 crc kubenswrapper[5072]: E1124 11:12:25.500169 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5469b2e215c556c9886ad852585a71558523ba4b0812c6d3c6342ca207daeace\": container with ID starting with 5469b2e215c556c9886ad852585a71558523ba4b0812c6d3c6342ca207daeace not found: ID does not exist" containerID="5469b2e215c556c9886ad852585a71558523ba4b0812c6d3c6342ca207daeace" Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 11:12:25.500191 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5469b2e215c556c9886ad852585a71558523ba4b0812c6d3c6342ca207daeace"} err="failed to get container status \"5469b2e215c556c9886ad852585a71558523ba4b0812c6d3c6342ca207daeace\": rpc error: code = NotFound desc = could not find container \"5469b2e215c556c9886ad852585a71558523ba4b0812c6d3c6342ca207daeace\": container with ID starting with 5469b2e215c556c9886ad852585a71558523ba4b0812c6d3c6342ca207daeace not found: ID does not exist" Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 11:12:25.759044 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s9t8g"] Nov 24 11:12:25 crc kubenswrapper[5072]: I1124 11:12:25.763936 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-s9t8g"] Nov 24 11:12:26 crc kubenswrapper[5072]: I1124 11:12:26.522731 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82xhn" podUID="0e24c213-2ec7-48d9-a18c-bc0457d2a8a3" containerName="registry-server" probeResult="failure" output=< Nov 24 11:12:26 crc kubenswrapper[5072]: timeout: failed to connect service ":50051" within 1s Nov 24 11:12:26 crc kubenswrapper[5072]: > Nov 24 11:12:27 crc kubenswrapper[5072]: I1124 11:12:27.027346 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f53d96c-25ab-4cc4-ac1a-84ae05681d4b" path="/var/lib/kubelet/pods/2f53d96c-25ab-4cc4-ac1a-84ae05681d4b/volumes" Nov 24 11:12:27 crc kubenswrapper[5072]: I1124 11:12:27.199774 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lsrl7"] Nov 24 11:12:27 crc kubenswrapper[5072]: I1124 11:12:27.200123 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lsrl7" podUID="7b22a28d-845b-4cc5-a4d6-bd747cf5c958" containerName="registry-server" containerID="cri-o://21b764d02d436eae6d6cca4920fc2c2f6f69e24249fe40ad370f70e08df49260" gracePeriod=2 Nov 24 11:12:28 crc kubenswrapper[5072]: I1124 11:12:28.269987 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lsrl7" Nov 24 11:12:28 crc kubenswrapper[5072]: I1124 11:12:28.441258 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmjlc\" (UniqueName: \"kubernetes.io/projected/7b22a28d-845b-4cc5-a4d6-bd747cf5c958-kube-api-access-vmjlc\") pod \"7b22a28d-845b-4cc5-a4d6-bd747cf5c958\" (UID: \"7b22a28d-845b-4cc5-a4d6-bd747cf5c958\") " Nov 24 11:12:28 crc kubenswrapper[5072]: I1124 11:12:28.441356 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b22a28d-845b-4cc5-a4d6-bd747cf5c958-utilities\") pod \"7b22a28d-845b-4cc5-a4d6-bd747cf5c958\" (UID: \"7b22a28d-845b-4cc5-a4d6-bd747cf5c958\") " Nov 24 11:12:28 crc kubenswrapper[5072]: I1124 11:12:28.441509 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b22a28d-845b-4cc5-a4d6-bd747cf5c958-catalog-content\") pod \"7b22a28d-845b-4cc5-a4d6-bd747cf5c958\" (UID: \"7b22a28d-845b-4cc5-a4d6-bd747cf5c958\") " Nov 24 11:12:28 crc kubenswrapper[5072]: I1124 11:12:28.443064 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b22a28d-845b-4cc5-a4d6-bd747cf5c958-utilities" (OuterVolumeSpecName: "utilities") pod "7b22a28d-845b-4cc5-a4d6-bd747cf5c958" (UID: "7b22a28d-845b-4cc5-a4d6-bd747cf5c958"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:12:28 crc kubenswrapper[5072]: I1124 11:12:28.449609 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b22a28d-845b-4cc5-a4d6-bd747cf5c958-kube-api-access-vmjlc" (OuterVolumeSpecName: "kube-api-access-vmjlc") pod "7b22a28d-845b-4cc5-a4d6-bd747cf5c958" (UID: "7b22a28d-845b-4cc5-a4d6-bd747cf5c958"). InnerVolumeSpecName "kube-api-access-vmjlc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:12:28 crc kubenswrapper[5072]: I1124 11:12:28.456887 5072 generic.go:334] "Generic (PLEG): container finished" podID="7b22a28d-845b-4cc5-a4d6-bd747cf5c958" containerID="21b764d02d436eae6d6cca4920fc2c2f6f69e24249fe40ad370f70e08df49260" exitCode=0 Nov 24 11:12:28 crc kubenswrapper[5072]: I1124 11:12:28.456945 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsrl7" event={"ID":"7b22a28d-845b-4cc5-a4d6-bd747cf5c958","Type":"ContainerDied","Data":"21b764d02d436eae6d6cca4920fc2c2f6f69e24249fe40ad370f70e08df49260"} Nov 24 11:12:28 crc kubenswrapper[5072]: I1124 11:12:28.456983 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsrl7" event={"ID":"7b22a28d-845b-4cc5-a4d6-bd747cf5c958","Type":"ContainerDied","Data":"88e1b2c3feec9a70de81c0dcaed4a38cac26a211393cf43a12816f6ab6466bd1"} Nov 24 11:12:28 crc kubenswrapper[5072]: I1124 11:12:28.457011 5072 scope.go:117] "RemoveContainer" containerID="21b764d02d436eae6d6cca4920fc2c2f6f69e24249fe40ad370f70e08df49260" Nov 24 11:12:28 crc kubenswrapper[5072]: I1124 11:12:28.457207 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lsrl7" Nov 24 11:12:28 crc kubenswrapper[5072]: I1124 11:12:28.476664 5072 scope.go:117] "RemoveContainer" containerID="6af5f65c6ce8cec9254a56e74b780f8e38f884ec8b232b1ea427824f8af2ae83" Nov 24 11:12:28 crc kubenswrapper[5072]: I1124 11:12:28.495189 5072 scope.go:117] "RemoveContainer" containerID="0be346f6f2d879cfafecf2452a8bc82f4b4975e5615bc3f2d57fdbe08fe0ab2c" Nov 24 11:12:28 crc kubenswrapper[5072]: I1124 11:12:28.515347 5072 scope.go:117] "RemoveContainer" containerID="21b764d02d436eae6d6cca4920fc2c2f6f69e24249fe40ad370f70e08df49260" Nov 24 11:12:28 crc kubenswrapper[5072]: E1124 11:12:28.515804 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21b764d02d436eae6d6cca4920fc2c2f6f69e24249fe40ad370f70e08df49260\": container with ID starting with 21b764d02d436eae6d6cca4920fc2c2f6f69e24249fe40ad370f70e08df49260 not found: ID does not exist" containerID="21b764d02d436eae6d6cca4920fc2c2f6f69e24249fe40ad370f70e08df49260" Nov 24 11:12:28 crc kubenswrapper[5072]: I1124 11:12:28.515866 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21b764d02d436eae6d6cca4920fc2c2f6f69e24249fe40ad370f70e08df49260"} err="failed to get container status \"21b764d02d436eae6d6cca4920fc2c2f6f69e24249fe40ad370f70e08df49260\": rpc error: code = NotFound desc = could not find container \"21b764d02d436eae6d6cca4920fc2c2f6f69e24249fe40ad370f70e08df49260\": container with ID starting with 21b764d02d436eae6d6cca4920fc2c2f6f69e24249fe40ad370f70e08df49260 not found: ID does not exist" Nov 24 11:12:28 crc kubenswrapper[5072]: I1124 11:12:28.515911 5072 scope.go:117] "RemoveContainer" containerID="6af5f65c6ce8cec9254a56e74b780f8e38f884ec8b232b1ea427824f8af2ae83" Nov 24 11:12:28 crc kubenswrapper[5072]: E1124 11:12:28.516226 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6af5f65c6ce8cec9254a56e74b780f8e38f884ec8b232b1ea427824f8af2ae83\": container with ID starting with 6af5f65c6ce8cec9254a56e74b780f8e38f884ec8b232b1ea427824f8af2ae83 not found: ID does not exist" 
containerID="6af5f65c6ce8cec9254a56e74b780f8e38f884ec8b232b1ea427824f8af2ae83" Nov 24 11:12:28 crc kubenswrapper[5072]: I1124 11:12:28.516252 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6af5f65c6ce8cec9254a56e74b780f8e38f884ec8b232b1ea427824f8af2ae83"} err="failed to get container status \"6af5f65c6ce8cec9254a56e74b780f8e38f884ec8b232b1ea427824f8af2ae83\": rpc error: code = NotFound desc = could not find container \"6af5f65c6ce8cec9254a56e74b780f8e38f884ec8b232b1ea427824f8af2ae83\": container with ID starting with 6af5f65c6ce8cec9254a56e74b780f8e38f884ec8b232b1ea427824f8af2ae83 not found: ID does not exist" Nov 24 11:12:28 crc kubenswrapper[5072]: I1124 11:12:28.516275 5072 scope.go:117] "RemoveContainer" containerID="0be346f6f2d879cfafecf2452a8bc82f4b4975e5615bc3f2d57fdbe08fe0ab2c" Nov 24 11:12:28 crc kubenswrapper[5072]: E1124 11:12:28.516553 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0be346f6f2d879cfafecf2452a8bc82f4b4975e5615bc3f2d57fdbe08fe0ab2c\": container with ID starting with 0be346f6f2d879cfafecf2452a8bc82f4b4975e5615bc3f2d57fdbe08fe0ab2c not found: ID does not exist" containerID="0be346f6f2d879cfafecf2452a8bc82f4b4975e5615bc3f2d57fdbe08fe0ab2c" Nov 24 11:12:28 crc kubenswrapper[5072]: I1124 11:12:28.516593 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0be346f6f2d879cfafecf2452a8bc82f4b4975e5615bc3f2d57fdbe08fe0ab2c"} err="failed to get container status \"0be346f6f2d879cfafecf2452a8bc82f4b4975e5615bc3f2d57fdbe08fe0ab2c\": rpc error: code = NotFound desc = could not find container \"0be346f6f2d879cfafecf2452a8bc82f4b4975e5615bc3f2d57fdbe08fe0ab2c\": container with ID starting with 0be346f6f2d879cfafecf2452a8bc82f4b4975e5615bc3f2d57fdbe08fe0ab2c not found: ID does not exist" Nov 24 11:12:28 crc kubenswrapper[5072]: I1124 11:12:28.518358 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b22a28d-845b-4cc5-a4d6-bd747cf5c958-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b22a28d-845b-4cc5-a4d6-bd747cf5c958" (UID: "7b22a28d-845b-4cc5-a4d6-bd747cf5c958"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:12:28 crc kubenswrapper[5072]: I1124 11:12:28.543654 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b22a28d-845b-4cc5-a4d6-bd747cf5c958-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:12:28 crc kubenswrapper[5072]: I1124 11:12:28.543698 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b22a28d-845b-4cc5-a4d6-bd747cf5c958-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:12:28 crc kubenswrapper[5072]: I1124 11:12:28.543716 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmjlc\" (UniqueName: \"kubernetes.io/projected/7b22a28d-845b-4cc5-a4d6-bd747cf5c958-kube-api-access-vmjlc\") on node \"crc\" DevicePath \"\"" Nov 24 11:12:28 crc kubenswrapper[5072]: I1124 11:12:28.791749 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lsrl7"] Nov 24 11:12:28 crc kubenswrapper[5072]: I1124 11:12:28.796563 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lsrl7"] Nov 24 11:12:29 crc kubenswrapper[5072]: I1124 11:12:29.030843 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b22a28d-845b-4cc5-a4d6-bd747cf5c958" path="/var/lib/kubelet/pods/7b22a28d-845b-4cc5-a4d6-bd747cf5c958/volumes" Nov 24 11:12:31 crc kubenswrapper[5072]: I1124 11:12:31.823517 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-slkhf" Nov 24 11:12:33 crc kubenswrapper[5072]: I1124 11:12:33.989071 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rxs28"] Nov 24 11:12:35 crc kubenswrapper[5072]: I1124 11:12:35.544294 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-82xhn" Nov 24 11:12:35 crc kubenswrapper[5072]: I1124 11:12:35.613107 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-82xhn" Nov 24 11:12:36 crc kubenswrapper[5072]: I1124 11:12:36.798060 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-82xhn"] Nov 24 11:12:37 crc kubenswrapper[5072]: I1124 11:12:37.521800 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-82xhn" podUID="0e24c213-2ec7-48d9-a18c-bc0457d2a8a3" containerName="registry-server" containerID="cri-o://f0871c23ea8d3840ad5cd29b621e88438f177d73c4780b447c4e1ddf323b728d" gracePeriod=2 Nov 24 11:12:37 crc kubenswrapper[5072]: I1124 11:12:37.938498 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-82xhn" Nov 24 11:12:38 crc kubenswrapper[5072]: I1124 11:12:38.090149 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pssm\" (UniqueName: \"kubernetes.io/projected/0e24c213-2ec7-48d9-a18c-bc0457d2a8a3-kube-api-access-8pssm\") pod \"0e24c213-2ec7-48d9-a18c-bc0457d2a8a3\" (UID: \"0e24c213-2ec7-48d9-a18c-bc0457d2a8a3\") " Nov 24 11:12:38 crc kubenswrapper[5072]: I1124 11:12:38.090257 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e24c213-2ec7-48d9-a18c-bc0457d2a8a3-utilities\") pod \"0e24c213-2ec7-48d9-a18c-bc0457d2a8a3\" (UID: \"0e24c213-2ec7-48d9-a18c-bc0457d2a8a3\") " Nov 24 11:12:38 crc kubenswrapper[5072]: I1124 11:12:38.090369 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e24c213-2ec7-48d9-a18c-bc0457d2a8a3-catalog-content\") pod \"0e24c213-2ec7-48d9-a18c-bc0457d2a8a3\" (UID: \"0e24c213-2ec7-48d9-a18c-bc0457d2a8a3\") " Nov 24 11:12:38 crc kubenswrapper[5072]: I1124 11:12:38.093907 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e24c213-2ec7-48d9-a18c-bc0457d2a8a3-utilities" (OuterVolumeSpecName: "utilities") pod "0e24c213-2ec7-48d9-a18c-bc0457d2a8a3" (UID: "0e24c213-2ec7-48d9-a18c-bc0457d2a8a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:12:38 crc kubenswrapper[5072]: I1124 11:12:38.104351 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:12:38 crc kubenswrapper[5072]: I1124 11:12:38.105593 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e24c213-2ec7-48d9-a18c-bc0457d2a8a3-kube-api-access-8pssm" (OuterVolumeSpecName: "kube-api-access-8pssm") pod "0e24c213-2ec7-48d9-a18c-bc0457d2a8a3" (UID: "0e24c213-2ec7-48d9-a18c-bc0457d2a8a3"). InnerVolumeSpecName "kube-api-access-8pssm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:12:38 crc kubenswrapper[5072]: I1124 11:12:38.193128 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pssm\" (UniqueName: \"kubernetes.io/projected/0e24c213-2ec7-48d9-a18c-bc0457d2a8a3-kube-api-access-8pssm\") on node \"crc\" DevicePath \"\"" Nov 24 11:12:38 crc kubenswrapper[5072]: I1124 11:12:38.193472 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e24c213-2ec7-48d9-a18c-bc0457d2a8a3-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:12:38 crc kubenswrapper[5072]: I1124 11:12:38.209240 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e24c213-2ec7-48d9-a18c-bc0457d2a8a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e24c213-2ec7-48d9-a18c-bc0457d2a8a3" (UID: "0e24c213-2ec7-48d9-a18c-bc0457d2a8a3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:12:38 crc kubenswrapper[5072]: I1124 11:12:38.295366 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e24c213-2ec7-48d9-a18c-bc0457d2a8a3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:12:38 crc kubenswrapper[5072]: I1124 11:12:38.533438 5072 generic.go:334] "Generic (PLEG): container finished" podID="0e24c213-2ec7-48d9-a18c-bc0457d2a8a3" containerID="f0871c23ea8d3840ad5cd29b621e88438f177d73c4780b447c4e1ddf323b728d" exitCode=0 Nov 24 11:12:38 crc kubenswrapper[5072]: I1124 11:12:38.533510 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82xhn" event={"ID":"0e24c213-2ec7-48d9-a18c-bc0457d2a8a3","Type":"ContainerDied","Data":"f0871c23ea8d3840ad5cd29b621e88438f177d73c4780b447c4e1ddf323b728d"} Nov 24 11:12:38 crc kubenswrapper[5072]: I1124 11:12:38.533550 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82xhn" event={"ID":"0e24c213-2ec7-48d9-a18c-bc0457d2a8a3","Type":"ContainerDied","Data":"95144dadec8b8bb33ed1da974e3b47c64b4b4311e95e48bdce4b2d67e2de9bf0"} Nov 24 11:12:38 crc kubenswrapper[5072]: I1124 11:12:38.533579 5072 scope.go:117] "RemoveContainer" containerID="f0871c23ea8d3840ad5cd29b621e88438f177d73c4780b447c4e1ddf323b728d" Nov 24 11:12:38 crc kubenswrapper[5072]: I1124 11:12:38.533735 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-82xhn" Nov 24 11:12:38 crc kubenswrapper[5072]: I1124 11:12:38.566631 5072 scope.go:117] "RemoveContainer" containerID="2cc2b3e1d86a70c2cb2cf7832218ca0b55cc5923f241eb1b4b0f880994e53788" Nov 24 11:12:38 crc kubenswrapper[5072]: I1124 11:12:38.592090 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-82xhn"] Nov 24 11:12:38 crc kubenswrapper[5072]: I1124 11:12:38.595718 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-82xhn"] Nov 24 11:12:38 crc kubenswrapper[5072]: I1124 11:12:38.610286 5072 scope.go:117] "RemoveContainer" containerID="58396fa2b0b653dd60c59ae33a144a3218aaa9ce45c5fdea0a31a519cd4d8d3d" Nov 24 11:12:38 crc kubenswrapper[5072]: I1124 11:12:38.633411 5072 scope.go:117] "RemoveContainer" containerID="f0871c23ea8d3840ad5cd29b621e88438f177d73c4780b447c4e1ddf323b728d" Nov 24 11:12:38 crc kubenswrapper[5072]: E1124 11:12:38.634026 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0871c23ea8d3840ad5cd29b621e88438f177d73c4780b447c4e1ddf323b728d\": container with ID starting with f0871c23ea8d3840ad5cd29b621e88438f177d73c4780b447c4e1ddf323b728d not found: ID does not exist" containerID="f0871c23ea8d3840ad5cd29b621e88438f177d73c4780b447c4e1ddf323b728d" Nov 24 11:12:38 crc kubenswrapper[5072]: I1124 11:12:38.634170 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0871c23ea8d3840ad5cd29b621e88438f177d73c4780b447c4e1ddf323b728d"} err="failed to get container status \"f0871c23ea8d3840ad5cd29b621e88438f177d73c4780b447c4e1ddf323b728d\": rpc error: code = NotFound desc = could not find container \"f0871c23ea8d3840ad5cd29b621e88438f177d73c4780b447c4e1ddf323b728d\": container with ID starting with f0871c23ea8d3840ad5cd29b621e88438f177d73c4780b447c4e1ddf323b728d not found: ID does not exist" Nov 24 11:12:38 crc 
kubenswrapper[5072]: I1124 11:12:38.634276 5072 scope.go:117] "RemoveContainer" containerID="2cc2b3e1d86a70c2cb2cf7832218ca0b55cc5923f241eb1b4b0f880994e53788" Nov 24 11:12:38 crc kubenswrapper[5072]: E1124 11:12:38.635058 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cc2b3e1d86a70c2cb2cf7832218ca0b55cc5923f241eb1b4b0f880994e53788\": container with ID starting with 2cc2b3e1d86a70c2cb2cf7832218ca0b55cc5923f241eb1b4b0f880994e53788 not found: ID does not exist" containerID="2cc2b3e1d86a70c2cb2cf7832218ca0b55cc5923f241eb1b4b0f880994e53788" Nov 24 11:12:38 crc kubenswrapper[5072]: I1124 11:12:38.635133 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cc2b3e1d86a70c2cb2cf7832218ca0b55cc5923f241eb1b4b0f880994e53788"} err="failed to get container status \"2cc2b3e1d86a70c2cb2cf7832218ca0b55cc5923f241eb1b4b0f880994e53788\": rpc error: code = NotFound desc = could not find container \"2cc2b3e1d86a70c2cb2cf7832218ca0b55cc5923f241eb1b4b0f880994e53788\": container with ID starting with 2cc2b3e1d86a70c2cb2cf7832218ca0b55cc5923f241eb1b4b0f880994e53788 not found: ID does not exist" Nov 24 11:12:38 crc kubenswrapper[5072]: I1124 11:12:38.635179 5072 scope.go:117] "RemoveContainer" containerID="58396fa2b0b653dd60c59ae33a144a3218aaa9ce45c5fdea0a31a519cd4d8d3d" Nov 24 11:12:38 crc kubenswrapper[5072]: E1124 11:12:38.635620 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58396fa2b0b653dd60c59ae33a144a3218aaa9ce45c5fdea0a31a519cd4d8d3d\": container with ID starting with 58396fa2b0b653dd60c59ae33a144a3218aaa9ce45c5fdea0a31a519cd4d8d3d not found: ID does not exist" containerID="58396fa2b0b653dd60c59ae33a144a3218aaa9ce45c5fdea0a31a519cd4d8d3d" Nov 24 11:12:38 crc kubenswrapper[5072]: I1124 11:12:38.635689 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58396fa2b0b653dd60c59ae33a144a3218aaa9ce45c5fdea0a31a519cd4d8d3d"} err="failed to get container status \"58396fa2b0b653dd60c59ae33a144a3218aaa9ce45c5fdea0a31a519cd4d8d3d\": rpc error: code = NotFound desc = could not find container \"58396fa2b0b653dd60c59ae33a144a3218aaa9ce45c5fdea0a31a519cd4d8d3d\": container with ID starting with 58396fa2b0b653dd60c59ae33a144a3218aaa9ce45c5fdea0a31a519cd4d8d3d not found: ID does not exist" Nov 24 11:12:39 crc kubenswrapper[5072]: I1124 11:12:39.028772 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e24c213-2ec7-48d9-a18c-bc0457d2a8a3" path="/var/lib/kubelet/pods/0e24c213-2ec7-48d9-a18c-bc0457d2a8a3/volumes" Nov 24 11:12:43 crc kubenswrapper[5072]: I1124 11:12:43.645164 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:12:43 crc kubenswrapper[5072]: I1124 11:12:43.645401 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:12:43 crc kubenswrapper[5072]: I1124 11:12:43.645440 5072 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 11:12:43 crc kubenswrapper[5072]: I1124 11:12:43.645907 5072 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976"} pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 11:12:43 crc kubenswrapper[5072]: I1124 11:12:43.645948 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" containerID="cri-o://a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976" gracePeriod=600 Nov 24 11:12:44 crc kubenswrapper[5072]: I1124 11:12:44.570979 5072 generic.go:334] "Generic (PLEG): container finished" podID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerID="a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976" exitCode=0 Nov 24 11:12:44 crc kubenswrapper[5072]: I1124 11:12:44.571043 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerDied","Data":"a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976"} Nov 24 11:12:44 crc kubenswrapper[5072]: I1124 11:12:44.571570 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerStarted","Data":"e839d6d58c16c68cbc04eeeedb69dee8ec0dd6b4c9bf97590bae2b1dd76b231f"} Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.028720 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-rxs28" podUID="c77a843c-6b36-4143-aff0-f5e7d227c11d" containerName="oauth-openshift" containerID="cri-o://5bb89a188c4140e6a63a98fe9a82ba1ca60e79ee8abebf0e85d4bf6b09c99e19" gracePeriod=15 Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.446236 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rxs28" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.497001 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-69b74fc85f-v4jks"] Nov 24 11:12:59 crc kubenswrapper[5072]: E1124 11:12:59.497404 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f9bfc36-3741-4e93-8356-f4fa8d8920a4" containerName="pruner" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.497430 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f9bfc36-3741-4e93-8356-f4fa8d8920a4" containerName="pruner" Nov 24 11:12:59 crc kubenswrapper[5072]: E1124 11:12:59.497452 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f53d96c-25ab-4cc4-ac1a-84ae05681d4b" containerName="registry-server" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.497466 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f53d96c-25ab-4cc4-ac1a-84ae05681d4b" containerName="registry-server" Nov 24 11:12:59 crc kubenswrapper[5072]: E1124 11:12:59.497484 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="525fe918-d559-44d0-b583-0347bbd7424c" containerName="pruner" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.497498 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="525fe918-d559-44d0-b583-0347bbd7424c" containerName="pruner" Nov 24 11:12:59 crc kubenswrapper[5072]: E1124 11:12:59.497516 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b22a28d-845b-4cc5-a4d6-bd747cf5c958" containerName="registry-server" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.497529 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b22a28d-845b-4cc5-a4d6-bd747cf5c958" containerName="registry-server" Nov 24 11:12:59 crc kubenswrapper[5072]: E1124 11:12:59.497549 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e24c213-2ec7-48d9-a18c-bc0457d2a8a3" containerName="extract-utilities" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.497562 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e24c213-2ec7-48d9-a18c-bc0457d2a8a3" containerName="extract-utilities" Nov 24 11:12:59 crc kubenswrapper[5072]: E1124 11:12:59.497582 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f53d96c-25ab-4cc4-ac1a-84ae05681d4b" containerName="extract-content" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.497595 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f53d96c-25ab-4cc4-ac1a-84ae05681d4b" containerName="extract-content" Nov 24 11:12:59 crc kubenswrapper[5072]: E1124 11:12:59.497611 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b22a28d-845b-4cc5-a4d6-bd747cf5c958" containerName="extract-content" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.497624 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b22a28d-845b-4cc5-a4d6-bd747cf5c958" containerName="extract-content" Nov 24 11:12:59 crc kubenswrapper[5072]: E1124 11:12:59.497689 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f53d96c-25ab-4cc4-ac1a-84ae05681d4b" containerName="extract-utilities" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.497704 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f53d96c-25ab-4cc4-ac1a-84ae05681d4b" containerName="extract-utilities" Nov 24 11:12:59 crc kubenswrapper[5072]: E1124 11:12:59.497721 5072 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7b22a28d-845b-4cc5-a4d6-bd747cf5c958" containerName="extract-utilities" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.497772 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b22a28d-845b-4cc5-a4d6-bd747cf5c958" containerName="extract-utilities" Nov 24 11:12:59 crc kubenswrapper[5072]: E1124 11:12:59.497795 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f157ffe3-63a8-4ad9-a432-d65de31b5e8f" containerName="registry-server" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.497811 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="f157ffe3-63a8-4ad9-a432-d65de31b5e8f" containerName="registry-server" Nov 24 11:12:59 crc kubenswrapper[5072]: E1124 11:12:59.498560 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e24c213-2ec7-48d9-a18c-bc0457d2a8a3" containerName="extract-content" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.498630 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e24c213-2ec7-48d9-a18c-bc0457d2a8a3" containerName="extract-content" Nov 24 11:12:59 crc kubenswrapper[5072]: E1124 11:12:59.498654 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e24c213-2ec7-48d9-a18c-bc0457d2a8a3" containerName="registry-server" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.498668 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e24c213-2ec7-48d9-a18c-bc0457d2a8a3" containerName="registry-server" Nov 24 11:12:59 crc kubenswrapper[5072]: E1124 11:12:59.498734 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c77a843c-6b36-4143-aff0-f5e7d227c11d" containerName="oauth-openshift" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.498748 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="c77a843c-6b36-4143-aff0-f5e7d227c11d" containerName="oauth-openshift" Nov 24 11:12:59 crc kubenswrapper[5072]: E1124 11:12:59.498766 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f157ffe3-63a8-4ad9-a432-d65de31b5e8f" containerName="extract-utilities" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.498819 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="f157ffe3-63a8-4ad9-a432-d65de31b5e8f" containerName="extract-utilities" Nov 24 11:12:59 crc kubenswrapper[5072]: E1124 11:12:59.498838 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f157ffe3-63a8-4ad9-a432-d65de31b5e8f" containerName="extract-content" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.498851 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="f157ffe3-63a8-4ad9-a432-d65de31b5e8f" containerName="extract-content" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.499142 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f9bfc36-3741-4e93-8356-f4fa8d8920a4" containerName="pruner" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.499200 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="525fe918-d559-44d0-b583-0347bbd7424c" containerName="pruner" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.499217 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="f157ffe3-63a8-4ad9-a432-d65de31b5e8f" containerName="registry-server" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.499236 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="c77a843c-6b36-4143-aff0-f5e7d227c11d" containerName="oauth-openshift" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.499414 5072 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2f53d96c-25ab-4cc4-ac1a-84ae05681d4b" containerName="registry-server" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.499436 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b22a28d-845b-4cc5-a4d6-bd747cf5c958" containerName="registry-server" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.499459 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e24c213-2ec7-48d9-a18c-bc0457d2a8a3" containerName="registry-server" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.500634 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.506533 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-69b74fc85f-v4jks"] Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.509116 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-router-certs\") pod \"c77a843c-6b36-4143-aff0-f5e7d227c11d\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.509846 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-user-template-login\") pod \"c77a843c-6b36-4143-aff0-f5e7d227c11d\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.509911 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-cliconfig\") pod \"c77a843c-6b36-4143-aff0-f5e7d227c11d\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.509958 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-service-ca\") pod \"c77a843c-6b36-4143-aff0-f5e7d227c11d\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.510005 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-trusted-ca-bundle\") pod \"c77a843c-6b36-4143-aff0-f5e7d227c11d\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.510045 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-user-template-provider-selection\") pod \"c77a843c-6b36-4143-aff0-f5e7d227c11d\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.510121 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-user-idp-0-file-data\") pod 
\"c77a843c-6b36-4143-aff0-f5e7d227c11d\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.510171 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-serving-cert\") pod \"c77a843c-6b36-4143-aff0-f5e7d227c11d\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.510217 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c77a843c-6b36-4143-aff0-f5e7d227c11d-audit-policies\") pod \"c77a843c-6b36-4143-aff0-f5e7d227c11d\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.510306 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-user-template-error\") pod \"c77a843c-6b36-4143-aff0-f5e7d227c11d\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.510412 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-ocp-branding-template\") pod \"c77a843c-6b36-4143-aff0-f5e7d227c11d\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.510467 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-session\") pod \"c77a843c-6b36-4143-aff0-f5e7d227c11d\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.510755 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.510816 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58mxf\" (UniqueName: \"kubernetes.io/projected/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-kube-api-access-58mxf\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.510918 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-system-session\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.510967 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.511020 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-user-template-error\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.511122 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.511165 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-audit-policies\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.511224 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-user-template-login\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.511311 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.511354 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.511455 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-system-router-certs\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc 
kubenswrapper[5072]: I1124 11:12:59.511555 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-audit-dir\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.511601 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-system-service-ca\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.511674 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.514235 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c77a843c-6b36-4143-aff0-f5e7d227c11d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "c77a843c-6b36-4143-aff0-f5e7d227c11d" (UID: "c77a843c-6b36-4143-aff0-f5e7d227c11d"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.514322 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "c77a843c-6b36-4143-aff0-f5e7d227c11d" (UID: "c77a843c-6b36-4143-aff0-f5e7d227c11d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.514587 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "c77a843c-6b36-4143-aff0-f5e7d227c11d" (UID: "c77a843c-6b36-4143-aff0-f5e7d227c11d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.517280 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "c77a843c-6b36-4143-aff0-f5e7d227c11d" (UID: "c77a843c-6b36-4143-aff0-f5e7d227c11d"). InnerVolumeSpecName "v4-0-config-user-template-error". 
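The burst of reconciler entries above and below is one oauth-openshift rollout step: every volume of the old replica (pod UID c77a843c-...) goes through "UnmountVolume started", then "UnmountVolume.TearDown succeeded", then "Volume detached", while the same set of volumes is attached and mounted for the new replica (78ea8bb2-...). A small stdin filter to cross-check such an excerpt, assuming one journal entry per line as journalctl emits them; the message patterns are copied from the entries here:

    import re
    import sys

    # Message fragments exactly as they appear in the reconciler entries above.
    started = re.compile(r'UnmountVolume started for volume \\?"([^"\\]+)')
    detached = re.compile(r'Volume detached for volume \\?"([^"\\]+)')

    pending: set[str] = set()
    for line in sys.stdin:
        pending.update(started.findall(line))
        for name in detached.findall(line):
            pending.discard(name)

    # Volumes whose teardown never reached "Volume detached" in the excerpt.
    print("still pending:", sorted(pending) or "none")

Usage, assuming the sketch is saved as check_unmounts.py: journalctl -u kubelet | python3 check_unmounts.py.
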
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.518105 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "c77a843c-6b36-4143-aff0-f5e7d227c11d" (UID: "c77a843c-6b36-4143-aff0-f5e7d227c11d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.518583 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "c77a843c-6b36-4143-aff0-f5e7d227c11d" (UID: "c77a843c-6b36-4143-aff0-f5e7d227c11d"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.519834 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "c77a843c-6b36-4143-aff0-f5e7d227c11d" (UID: "c77a843c-6b36-4143-aff0-f5e7d227c11d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.527176 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "c77a843c-6b36-4143-aff0-f5e7d227c11d" (UID: "c77a843c-6b36-4143-aff0-f5e7d227c11d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.532350 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "c77a843c-6b36-4143-aff0-f5e7d227c11d" (UID: "c77a843c-6b36-4143-aff0-f5e7d227c11d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.533157 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "c77a843c-6b36-4143-aff0-f5e7d227c11d" (UID: "c77a843c-6b36-4143-aff0-f5e7d227c11d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.535667 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "c77a843c-6b36-4143-aff0-f5e7d227c11d" (UID: "c77a843c-6b36-4143-aff0-f5e7d227c11d"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.540099 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "c77a843c-6b36-4143-aff0-f5e7d227c11d" (UID: "c77a843c-6b36-4143-aff0-f5e7d227c11d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.612161 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqzwr\" (UniqueName: \"kubernetes.io/projected/c77a843c-6b36-4143-aff0-f5e7d227c11d-kube-api-access-vqzwr\") pod \"c77a843c-6b36-4143-aff0-f5e7d227c11d\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.612215 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c77a843c-6b36-4143-aff0-f5e7d227c11d-audit-dir\") pod \"c77a843c-6b36-4143-aff0-f5e7d227c11d\" (UID: \"c77a843c-6b36-4143-aff0-f5e7d227c11d\") " Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.612444 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.612473 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-user-template-error\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.612507 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.612526 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-audit-policies\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.612547 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-user-template-login\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.612456 5072 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c77a843c-6b36-4143-aff0-f5e7d227c11d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "c77a843c-6b36-4143-aff0-f5e7d227c11d" (UID: "c77a843c-6b36-4143-aff0-f5e7d227c11d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.612574 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.612655 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.612719 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-system-router-certs\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.612761 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-audit-dir\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.612806 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-system-service-ca\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.612872 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.612912 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.612945 5072 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58mxf\" (UniqueName: \"kubernetes.io/projected/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-kube-api-access-58mxf\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.613001 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-system-session\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.613067 5072 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.613090 5072 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.613112 5072 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.613131 5072 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c77a843c-6b36-4143-aff0-f5e7d227c11d-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.613149 5072 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.613170 5072 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.613191 5072 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.613210 5072 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.613229 5072 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.613248 5072 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.613271 5072 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.613289 5072 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c77a843c-6b36-4143-aff0-f5e7d227c11d-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.613307 5072 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c77a843c-6b36-4143-aff0-f5e7d227c11d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.614186 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.614267 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.614299 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-system-service-ca\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.614315 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-audit-policies\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.614406 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-audit-dir\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.617221 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-user-template-error\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: 
\"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.617232 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c77a843c-6b36-4143-aff0-f5e7d227c11d-kube-api-access-vqzwr" (OuterVolumeSpecName: "kube-api-access-vqzwr") pod "c77a843c-6b36-4143-aff0-f5e7d227c11d" (UID: "c77a843c-6b36-4143-aff0-f5e7d227c11d"). InnerVolumeSpecName "kube-api-access-vqzwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.618195 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-user-template-login\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.618220 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.618524 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.618872 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-system-session\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.620770 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.621279 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.622734 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-v4-0-config-system-router-certs\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: 
\"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.637016 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58mxf\" (UniqueName: \"kubernetes.io/projected/78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c-kube-api-access-58mxf\") pod \"oauth-openshift-69b74fc85f-v4jks\" (UID: \"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c\") " pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.673714 5072 generic.go:334] "Generic (PLEG): container finished" podID="c77a843c-6b36-4143-aff0-f5e7d227c11d" containerID="5bb89a188c4140e6a63a98fe9a82ba1ca60e79ee8abebf0e85d4bf6b09c99e19" exitCode=0 Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.673772 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rxs28" event={"ID":"c77a843c-6b36-4143-aff0-f5e7d227c11d","Type":"ContainerDied","Data":"5bb89a188c4140e6a63a98fe9a82ba1ca60e79ee8abebf0e85d4bf6b09c99e19"} Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.673836 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rxs28" event={"ID":"c77a843c-6b36-4143-aff0-f5e7d227c11d","Type":"ContainerDied","Data":"8f0f7944981212dadc57678af153a4aa7cc9f32b4194098a51b601f230ea9af5"} Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.673866 5072 scope.go:117] "RemoveContainer" containerID="5bb89a188c4140e6a63a98fe9a82ba1ca60e79ee8abebf0e85d4bf6b09c99e19" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.673788 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rxs28" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.711090 5072 scope.go:117] "RemoveContainer" containerID="5bb89a188c4140e6a63a98fe9a82ba1ca60e79ee8abebf0e85d4bf6b09c99e19" Nov 24 11:12:59 crc kubenswrapper[5072]: E1124 11:12:59.712926 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bb89a188c4140e6a63a98fe9a82ba1ca60e79ee8abebf0e85d4bf6b09c99e19\": container with ID starting with 5bb89a188c4140e6a63a98fe9a82ba1ca60e79ee8abebf0e85d4bf6b09c99e19 not found: ID does not exist" containerID="5bb89a188c4140e6a63a98fe9a82ba1ca60e79ee8abebf0e85d4bf6b09c99e19" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.713271 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bb89a188c4140e6a63a98fe9a82ba1ca60e79ee8abebf0e85d4bf6b09c99e19"} err="failed to get container status \"5bb89a188c4140e6a63a98fe9a82ba1ca60e79ee8abebf0e85d4bf6b09c99e19\": rpc error: code = NotFound desc = could not find container \"5bb89a188c4140e6a63a98fe9a82ba1ca60e79ee8abebf0e85d4bf6b09c99e19\": container with ID starting with 5bb89a188c4140e6a63a98fe9a82ba1ca60e79ee8abebf0e85d4bf6b09c99e19 not found: ID does not exist" Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.714274 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rxs28"] Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.714706 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqzwr\" (UniqueName: \"kubernetes.io/projected/c77a843c-6b36-4143-aff0-f5e7d227c11d-kube-api-access-vqzwr\") on node \"crc\" DevicePath \"\"" Nov 24 11:12:59 crc 
kubenswrapper[5072]: I1124 11:12:59.717994 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rxs28"] Nov 24 11:12:59 crc kubenswrapper[5072]: I1124 11:12:59.885232 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:13:00 crc kubenswrapper[5072]: I1124 11:13:00.390546 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-69b74fc85f-v4jks"] Nov 24 11:13:00 crc kubenswrapper[5072]: I1124 11:13:00.683026 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" event={"ID":"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c","Type":"ContainerStarted","Data":"5ff526852f4eae83ffec0cc775d529610026b3bbb41dabd749d5823dbec55776"} Nov 24 11:13:01 crc kubenswrapper[5072]: I1124 11:13:01.030614 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c77a843c-6b36-4143-aff0-f5e7d227c11d" path="/var/lib/kubelet/pods/c77a843c-6b36-4143-aff0-f5e7d227c11d/volumes" Nov 24 11:13:01 crc kubenswrapper[5072]: I1124 11:13:01.693361 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" event={"ID":"78ea8bb2-04c7-4df9-a66e-0e09aea0bf7c","Type":"ContainerStarted","Data":"038b775e544dfc52422ceafe0880df063b9e5a40548923dc776fe8f2fc8c9d33"} Nov 24 11:13:01 crc kubenswrapper[5072]: I1124 11:13:01.693836 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:13:01 crc kubenswrapper[5072]: I1124 11:13:01.708263 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" Nov 24 11:13:01 crc kubenswrapper[5072]: I1124 11:13:01.723848 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-69b74fc85f-v4jks" podStartSLOduration=27.723826584 podStartE2EDuration="27.723826584s" podCreationTimestamp="2025-11-24 11:12:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:13:01.722194667 +0000 UTC m=+233.433719183" watchObservedRunningTime="2025-11-24 11:13:01.723826584 +0000 UTC m=+233.435351090" Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.524678 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-slkhf"] Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.528683 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-slkhf" podUID="cbeb508a-245e-4c6c-9d4f-6f6f330cea5d" containerName="registry-server" containerID="cri-o://7df137ea95a12b501b439a5b62bf06a9d5c8c8b3977525854a05582d5d5ed4e2" gracePeriod=30 Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.538128 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pvs9g"] Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.538393 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pvs9g" podUID="2f57ff17-1692-4fef-ba23-2b510f5a748b" containerName="registry-server" containerID="cri-o://b0c70158acfffa159f35a11f033f93bcec4e3685da783bab60c14a51202ff508" gracePeriod=30 Nov 24 
11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.553696 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ztvf4"] Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.553960 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-ztvf4" podUID="ff258f9c-6ace-46bf-8228-05668edcbdd6" containerName="marketplace-operator" containerID="cri-o://ccd408d15620e17218e4114f89aed9a7d363d8d800cebf9fed86e85667326a17" gracePeriod=30 Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.561453 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cvm5b"] Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.561785 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cvm5b" podUID="f9b1a9a7-8932-4045-bd63-bbc4d796d018" containerName="registry-server" containerID="cri-o://49bd04fc0e832d07318d2e881f19773a33521b47733fd5c3f1a726310283faed" gracePeriod=30 Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.572307 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cngqk"] Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.572608 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cngqk" podUID="2b89b78a-9da6-40b4-8285-4311083ba178" containerName="registry-server" containerID="cri-o://8f89d74e598ced8e066ace7c2cf527cfcb24ff775d2a3f4c544b4faa5280cb00" gracePeriod=30 Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.576124 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4scvq"] Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.576889 5072 util.go:30] "No sandbox for pod can be found. 
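The five marketplace pod deletions above arrive from the API (SyncLoop DELETE source="api"), and the kubelet kills each registry-server / marketplace-operator container with gracePeriod=30, the pod's termination grace period. A sketch of the API-side equivalent, assuming kubeconfig access to the cluster; the pod and namespace names are taken from the log, and in practice this delete is issued by the rollout that also creates the replacement marketplace-operator-79b997595-4scvq seen above:

    from kubernetes import client, config

    config.load_kube_config()  # assumes a reachable cluster
    core = client.CoreV1Api()

    # Mirrors the SyncLoop DELETE above; 30s matches gracePeriod=30
    # in the "Killing container with a grace period" entries.
    core.delete_namespaced_pod(
        name="marketplace-operator-79b997595-ztvf4",
        namespace="openshift-marketplace",
        grace_period_seconds=30,
    )
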
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4scvq" Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.593780 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4scvq"] Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.652078 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f3db2294-11de-44ff-ac29-e9f1bcf6cd24-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4scvq\" (UID: \"f3db2294-11de-44ff-ac29-e9f1bcf6cd24\") " pod="openshift-marketplace/marketplace-operator-79b997595-4scvq" Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.652124 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cpll\" (UniqueName: \"kubernetes.io/projected/f3db2294-11de-44ff-ac29-e9f1bcf6cd24-kube-api-access-5cpll\") pod \"marketplace-operator-79b997595-4scvq\" (UID: \"f3db2294-11de-44ff-ac29-e9f1bcf6cd24\") " pod="openshift-marketplace/marketplace-operator-79b997595-4scvq" Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.652251 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3db2294-11de-44ff-ac29-e9f1bcf6cd24-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4scvq\" (UID: \"f3db2294-11de-44ff-ac29-e9f1bcf6cd24\") " pod="openshift-marketplace/marketplace-operator-79b997595-4scvq" Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.753145 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3db2294-11de-44ff-ac29-e9f1bcf6cd24-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4scvq\" (UID: \"f3db2294-11de-44ff-ac29-e9f1bcf6cd24\") " pod="openshift-marketplace/marketplace-operator-79b997595-4scvq" Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.753211 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f3db2294-11de-44ff-ac29-e9f1bcf6cd24-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4scvq\" (UID: \"f3db2294-11de-44ff-ac29-e9f1bcf6cd24\") " pod="openshift-marketplace/marketplace-operator-79b997595-4scvq" Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.753227 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cpll\" (UniqueName: \"kubernetes.io/projected/f3db2294-11de-44ff-ac29-e9f1bcf6cd24-kube-api-access-5cpll\") pod \"marketplace-operator-79b997595-4scvq\" (UID: \"f3db2294-11de-44ff-ac29-e9f1bcf6cd24\") " pod="openshift-marketplace/marketplace-operator-79b997595-4scvq" Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.754453 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3db2294-11de-44ff-ac29-e9f1bcf6cd24-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4scvq\" (UID: \"f3db2294-11de-44ff-ac29-e9f1bcf6cd24\") " pod="openshift-marketplace/marketplace-operator-79b997595-4scvq" Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.760195 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/f3db2294-11de-44ff-ac29-e9f1bcf6cd24-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4scvq\" (UID: \"f3db2294-11de-44ff-ac29-e9f1bcf6cd24\") " pod="openshift-marketplace/marketplace-operator-79b997595-4scvq" Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.773489 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cpll\" (UniqueName: \"kubernetes.io/projected/f3db2294-11de-44ff-ac29-e9f1bcf6cd24-kube-api-access-5cpll\") pod \"marketplace-operator-79b997595-4scvq\" (UID: \"f3db2294-11de-44ff-ac29-e9f1bcf6cd24\") " pod="openshift-marketplace/marketplace-operator-79b997595-4scvq" Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.797905 5072 generic.go:334] "Generic (PLEG): container finished" podID="ff258f9c-6ace-46bf-8228-05668edcbdd6" containerID="ccd408d15620e17218e4114f89aed9a7d363d8d800cebf9fed86e85667326a17" exitCode=0 Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.797955 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ztvf4" event={"ID":"ff258f9c-6ace-46bf-8228-05668edcbdd6","Type":"ContainerDied","Data":"ccd408d15620e17218e4114f89aed9a7d363d8d800cebf9fed86e85667326a17"} Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.799336 5072 generic.go:334] "Generic (PLEG): container finished" podID="f9b1a9a7-8932-4045-bd63-bbc4d796d018" containerID="49bd04fc0e832d07318d2e881f19773a33521b47733fd5c3f1a726310283faed" exitCode=0 Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.799382 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cvm5b" event={"ID":"f9b1a9a7-8932-4045-bd63-bbc4d796d018","Type":"ContainerDied","Data":"49bd04fc0e832d07318d2e881f19773a33521b47733fd5c3f1a726310283faed"} Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.801475 5072 generic.go:334] "Generic (PLEG): container finished" podID="2f57ff17-1692-4fef-ba23-2b510f5a748b" containerID="b0c70158acfffa159f35a11f033f93bcec4e3685da783bab60c14a51202ff508" exitCode=0 Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.801510 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pvs9g" event={"ID":"2f57ff17-1692-4fef-ba23-2b510f5a748b","Type":"ContainerDied","Data":"b0c70158acfffa159f35a11f033f93bcec4e3685da783bab60c14a51202ff508"} Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.802716 5072 generic.go:334] "Generic (PLEG): container finished" podID="cbeb508a-245e-4c6c-9d4f-6f6f330cea5d" containerID="7df137ea95a12b501b439a5b62bf06a9d5c8c8b3977525854a05582d5d5ed4e2" exitCode=0 Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.802746 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-slkhf" event={"ID":"cbeb508a-245e-4c6c-9d4f-6f6f330cea5d","Type":"ContainerDied","Data":"7df137ea95a12b501b439a5b62bf06a9d5c8c8b3977525854a05582d5d5ed4e2"} Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.804022 5072 generic.go:334] "Generic (PLEG): container finished" podID="2b89b78a-9da6-40b4-8285-4311083ba178" containerID="8f89d74e598ced8e066ace7c2cf527cfcb24ff775d2a3f4c544b4faa5280cb00" exitCode=0 Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.804038 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cngqk" 
event={"ID":"2b89b78a-9da6-40b4-8285-4311083ba178","Type":"ContainerDied","Data":"8f89d74e598ced8e066ace7c2cf527cfcb24ff775d2a3f4c544b4faa5280cb00"} Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.939196 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4scvq" Nov 24 11:13:17 crc kubenswrapper[5072]: I1124 11:13:17.981825 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-slkhf" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.007743 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ztvf4" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.012476 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cvm5b" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.058696 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cngqk" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.160355 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b1a9a7-8932-4045-bd63-bbc4d796d018-catalog-content\") pod \"f9b1a9a7-8932-4045-bd63-bbc4d796d018\" (UID: \"f9b1a9a7-8932-4045-bd63-bbc4d796d018\") " Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.160446 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnvd5\" (UniqueName: \"kubernetes.io/projected/f9b1a9a7-8932-4045-bd63-bbc4d796d018-kube-api-access-gnvd5\") pod \"f9b1a9a7-8932-4045-bd63-bbc4d796d018\" (UID: \"f9b1a9a7-8932-4045-bd63-bbc4d796d018\") " Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.160467 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbeb508a-245e-4c6c-9d4f-6f6f330cea5d-utilities\") pod \"cbeb508a-245e-4c6c-9d4f-6f6f330cea5d\" (UID: \"cbeb508a-245e-4c6c-9d4f-6f6f330cea5d\") " Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.160490 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b1a9a7-8932-4045-bd63-bbc4d796d018-utilities\") pod \"f9b1a9a7-8932-4045-bd63-bbc4d796d018\" (UID: \"f9b1a9a7-8932-4045-bd63-bbc4d796d018\") " Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.160510 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqcm9\" (UniqueName: \"kubernetes.io/projected/cbeb508a-245e-4c6c-9d4f-6f6f330cea5d-kube-api-access-wqcm9\") pod \"cbeb508a-245e-4c6c-9d4f-6f6f330cea5d\" (UID: \"cbeb508a-245e-4c6c-9d4f-6f6f330cea5d\") " Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.160533 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ff258f9c-6ace-46bf-8228-05668edcbdd6-marketplace-operator-metrics\") pod \"ff258f9c-6ace-46bf-8228-05668edcbdd6\" (UID: \"ff258f9c-6ace-46bf-8228-05668edcbdd6\") " Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.160563 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/cbeb508a-245e-4c6c-9d4f-6f6f330cea5d-catalog-content\") pod \"cbeb508a-245e-4c6c-9d4f-6f6f330cea5d\" (UID: \"cbeb508a-245e-4c6c-9d4f-6f6f330cea5d\") " Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.160593 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff258f9c-6ace-46bf-8228-05668edcbdd6-marketplace-trusted-ca\") pod \"ff258f9c-6ace-46bf-8228-05668edcbdd6\" (UID: \"ff258f9c-6ace-46bf-8228-05668edcbdd6\") " Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.160619 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b89b78a-9da6-40b4-8285-4311083ba178-catalog-content\") pod \"2b89b78a-9da6-40b4-8285-4311083ba178\" (UID: \"2b89b78a-9da6-40b4-8285-4311083ba178\") " Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.160659 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b89b78a-9da6-40b4-8285-4311083ba178-utilities\") pod \"2b89b78a-9da6-40b4-8285-4311083ba178\" (UID: \"2b89b78a-9da6-40b4-8285-4311083ba178\") " Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.160688 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cfrh\" (UniqueName: \"kubernetes.io/projected/2b89b78a-9da6-40b4-8285-4311083ba178-kube-api-access-2cfrh\") pod \"2b89b78a-9da6-40b4-8285-4311083ba178\" (UID: \"2b89b78a-9da6-40b4-8285-4311083ba178\") " Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.160705 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbh28\" (UniqueName: \"kubernetes.io/projected/ff258f9c-6ace-46bf-8228-05668edcbdd6-kube-api-access-hbh28\") pod \"ff258f9c-6ace-46bf-8228-05668edcbdd6\" (UID: \"ff258f9c-6ace-46bf-8228-05668edcbdd6\") " Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.162042 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbeb508a-245e-4c6c-9d4f-6f6f330cea5d-utilities" (OuterVolumeSpecName: "utilities") pod "cbeb508a-245e-4c6c-9d4f-6f6f330cea5d" (UID: "cbeb508a-245e-4c6c-9d4f-6f6f330cea5d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.162679 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff258f9c-6ace-46bf-8228-05668edcbdd6-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "ff258f9c-6ace-46bf-8228-05668edcbdd6" (UID: "ff258f9c-6ace-46bf-8228-05668edcbdd6"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.164294 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b89b78a-9da6-40b4-8285-4311083ba178-utilities" (OuterVolumeSpecName: "utilities") pod "2b89b78a-9da6-40b4-8285-4311083ba178" (UID: "2b89b78a-9da6-40b4-8285-4311083ba178"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.164412 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9b1a9a7-8932-4045-bd63-bbc4d796d018-utilities" (OuterVolumeSpecName: "utilities") pod "f9b1a9a7-8932-4045-bd63-bbc4d796d018" (UID: "f9b1a9a7-8932-4045-bd63-bbc4d796d018"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.165446 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9b1a9a7-8932-4045-bd63-bbc4d796d018-kube-api-access-gnvd5" (OuterVolumeSpecName: "kube-api-access-gnvd5") pod "f9b1a9a7-8932-4045-bd63-bbc4d796d018" (UID: "f9b1a9a7-8932-4045-bd63-bbc4d796d018"). InnerVolumeSpecName "kube-api-access-gnvd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.165832 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff258f9c-6ace-46bf-8228-05668edcbdd6-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "ff258f9c-6ace-46bf-8228-05668edcbdd6" (UID: "ff258f9c-6ace-46bf-8228-05668edcbdd6"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.171448 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbeb508a-245e-4c6c-9d4f-6f6f330cea5d-kube-api-access-wqcm9" (OuterVolumeSpecName: "kube-api-access-wqcm9") pod "cbeb508a-245e-4c6c-9d4f-6f6f330cea5d" (UID: "cbeb508a-245e-4c6c-9d4f-6f6f330cea5d"). InnerVolumeSpecName "kube-api-access-wqcm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.173535 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b89b78a-9da6-40b4-8285-4311083ba178-kube-api-access-2cfrh" (OuterVolumeSpecName: "kube-api-access-2cfrh") pod "2b89b78a-9da6-40b4-8285-4311083ba178" (UID: "2b89b78a-9da6-40b4-8285-4311083ba178"). InnerVolumeSpecName "kube-api-access-2cfrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.173896 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff258f9c-6ace-46bf-8228-05668edcbdd6-kube-api-access-hbh28" (OuterVolumeSpecName: "kube-api-access-hbh28") pod "ff258f9c-6ace-46bf-8228-05668edcbdd6" (UID: "ff258f9c-6ace-46bf-8228-05668edcbdd6"). InnerVolumeSpecName "kube-api-access-hbh28". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.188347 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4scvq"] Nov 24 11:13:18 crc kubenswrapper[5072]: W1124 11:13:18.198911 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3db2294_11de_44ff_ac29_e9f1bcf6cd24.slice/crio-0548827edcbc5ed11ae3c86777873e2a0e9261771ff13962f256bd2e2422ef94 WatchSource:0}: Error finding container 0548827edcbc5ed11ae3c86777873e2a0e9261771ff13962f256bd2e2422ef94: Status 404 returned error can't find the container with id 0548827edcbc5ed11ae3c86777873e2a0e9261771ff13962f256bd2e2422ef94 Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.201918 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9b1a9a7-8932-4045-bd63-bbc4d796d018-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9b1a9a7-8932-4045-bd63-bbc4d796d018" (UID: "f9b1a9a7-8932-4045-bd63-bbc4d796d018"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.217070 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbeb508a-245e-4c6c-9d4f-6f6f330cea5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbeb508a-245e-4c6c-9d4f-6f6f330cea5d" (UID: "cbeb508a-245e-4c6c-9d4f-6f6f330cea5d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.261765 5072 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff258f9c-6ace-46bf-8228-05668edcbdd6-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.261787 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b89b78a-9da6-40b4-8285-4311083ba178-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.261797 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cfrh\" (UniqueName: \"kubernetes.io/projected/2b89b78a-9da6-40b4-8285-4311083ba178-kube-api-access-2cfrh\") on node \"crc\" DevicePath \"\"" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.261805 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbh28\" (UniqueName: \"kubernetes.io/projected/ff258f9c-6ace-46bf-8228-05668edcbdd6-kube-api-access-hbh28\") on node \"crc\" DevicePath \"\"" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.261814 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b1a9a7-8932-4045-bd63-bbc4d796d018-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.261822 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbeb508a-245e-4c6c-9d4f-6f6f330cea5d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.261830 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnvd5\" (UniqueName: \"kubernetes.io/projected/f9b1a9a7-8932-4045-bd63-bbc4d796d018-kube-api-access-gnvd5\") on 
node \"crc\" DevicePath \"\"" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.261838 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b1a9a7-8932-4045-bd63-bbc4d796d018-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.261847 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqcm9\" (UniqueName: \"kubernetes.io/projected/cbeb508a-245e-4c6c-9d4f-6f6f330cea5d-kube-api-access-wqcm9\") on node \"crc\" DevicePath \"\"" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.261855 5072 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ff258f9c-6ace-46bf-8228-05668edcbdd6-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.261864 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbeb508a-245e-4c6c-9d4f-6f6f330cea5d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.295277 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b89b78a-9da6-40b4-8285-4311083ba178-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b89b78a-9da6-40b4-8285-4311083ba178" (UID: "2b89b78a-9da6-40b4-8285-4311083ba178"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.346930 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pvs9g" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.365427 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b89b78a-9da6-40b4-8285-4311083ba178-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.466811 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f57ff17-1692-4fef-ba23-2b510f5a748b-catalog-content\") pod \"2f57ff17-1692-4fef-ba23-2b510f5a748b\" (UID: \"2f57ff17-1692-4fef-ba23-2b510f5a748b\") " Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.466901 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nm6n\" (UniqueName: \"kubernetes.io/projected/2f57ff17-1692-4fef-ba23-2b510f5a748b-kube-api-access-2nm6n\") pod \"2f57ff17-1692-4fef-ba23-2b510f5a748b\" (UID: \"2f57ff17-1692-4fef-ba23-2b510f5a748b\") " Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.466930 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f57ff17-1692-4fef-ba23-2b510f5a748b-utilities\") pod \"2f57ff17-1692-4fef-ba23-2b510f5a748b\" (UID: \"2f57ff17-1692-4fef-ba23-2b510f5a748b\") " Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.467854 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f57ff17-1692-4fef-ba23-2b510f5a748b-utilities" (OuterVolumeSpecName: "utilities") pod "2f57ff17-1692-4fef-ba23-2b510f5a748b" (UID: "2f57ff17-1692-4fef-ba23-2b510f5a748b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.472102 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f57ff17-1692-4fef-ba23-2b510f5a748b-kube-api-access-2nm6n" (OuterVolumeSpecName: "kube-api-access-2nm6n") pod "2f57ff17-1692-4fef-ba23-2b510f5a748b" (UID: "2f57ff17-1692-4fef-ba23-2b510f5a748b"). InnerVolumeSpecName "kube-api-access-2nm6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.531459 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f57ff17-1692-4fef-ba23-2b510f5a748b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f57ff17-1692-4fef-ba23-2b510f5a748b" (UID: "2f57ff17-1692-4fef-ba23-2b510f5a748b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.572057 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nm6n\" (UniqueName: \"kubernetes.io/projected/2f57ff17-1692-4fef-ba23-2b510f5a748b-kube-api-access-2nm6n\") on node \"crc\" DevicePath \"\"" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.572095 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f57ff17-1692-4fef-ba23-2b510f5a748b-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.572105 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f57ff17-1692-4fef-ba23-2b510f5a748b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.812454 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-slkhf" event={"ID":"cbeb508a-245e-4c6c-9d4f-6f6f330cea5d","Type":"ContainerDied","Data":"939d608df208286fe427568c919fe8ba318dc489192c59c701db33fcaec1bfc5"} Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.812513 5072 scope.go:117] "RemoveContainer" containerID="7df137ea95a12b501b439a5b62bf06a9d5c8c8b3977525854a05582d5d5ed4e2" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.812553 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-slkhf" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.815293 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cngqk" event={"ID":"2b89b78a-9da6-40b4-8285-4311083ba178","Type":"ContainerDied","Data":"81521d2fdd979fcbd96bbf586e97e673fb8f4467d6ce38c732f32584fb89cf1b"} Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.815416 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cngqk" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.817527 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ztvf4" event={"ID":"ff258f9c-6ace-46bf-8228-05668edcbdd6","Type":"ContainerDied","Data":"cc3419d4ddcdcdfd5fa243cafce6c84fbbc7c86089add5416c8d67f8e2fe6d37"} Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.817707 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ztvf4" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.820154 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cvm5b" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.820197 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cvm5b" event={"ID":"f9b1a9a7-8932-4045-bd63-bbc4d796d018","Type":"ContainerDied","Data":"3648b5f00ab456e28452b7792d6bb6ffd2765ec564499205b70a7999ac33cb85"} Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.821904 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4scvq" event={"ID":"f3db2294-11de-44ff-ac29-e9f1bcf6cd24","Type":"ContainerStarted","Data":"3756a06bfee7b84eef97dd0db453a1452d54e889c48891b0f04400a6211ee4c1"} Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.821926 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4scvq" event={"ID":"f3db2294-11de-44ff-ac29-e9f1bcf6cd24","Type":"ContainerStarted","Data":"0548827edcbc5ed11ae3c86777873e2a0e9261771ff13962f256bd2e2422ef94"} Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.822082 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-4scvq" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.824619 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pvs9g" event={"ID":"2f57ff17-1692-4fef-ba23-2b510f5a748b","Type":"ContainerDied","Data":"0f9de5a99d4455e5c05febd476c92ccf2c123f3d8fc7dfc232c2e217c3b74b9c"} Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.824736 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pvs9g" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.826802 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-4scvq" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.833602 5072 scope.go:117] "RemoveContainer" containerID="cf85a81334a29ad002bd0dff52348cc2d895a47547614886a3821fbe67aeebce" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.842712 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-4scvq" podStartSLOduration=1.842686724 podStartE2EDuration="1.842686724s" podCreationTimestamp="2025-11-24 11:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:13:18.839194711 +0000 UTC m=+250.550719197" watchObservedRunningTime="2025-11-24 11:13:18.842686724 +0000 UTC m=+250.554211210" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.887464 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-slkhf"] Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.892462 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-slkhf"] Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.894980 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cngqk"] Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.897650 5072 scope.go:117] "RemoveContainer" containerID="273c8e1614c8796c7f274fc3178d7508cc1dc89246aaaf2d29d8b8c30f5833da" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.899259 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cngqk"] Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.925069 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pvs9g"] Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.935479 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pvs9g"] Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.936565 5072 scope.go:117] "RemoveContainer" containerID="8f89d74e598ced8e066ace7c2cf527cfcb24ff775d2a3f4c544b4faa5280cb00" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.943829 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ztvf4"] Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.946198 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ztvf4"] Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.949839 5072 scope.go:117] "RemoveContainer" containerID="8e1303b29ce7f1b5915d240567474fc8af18ff77b9f8c7a0d27f35cee2ddb9a7" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.956607 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cvm5b"] Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.961674 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cvm5b"] Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 11:13:18.977880 5072 scope.go:117] "RemoveContainer" containerID="4b1e65418291db316bdd7bc4ef4f404e1ad9a81e7fbf5b403e62a7d339755957" Nov 24 11:13:18 crc kubenswrapper[5072]: I1124 
11:13:18.996310 5072 scope.go:117] "RemoveContainer" containerID="ccd408d15620e17218e4114f89aed9a7d363d8d800cebf9fed86e85667326a17" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.007962 5072 scope.go:117] "RemoveContainer" containerID="49bd04fc0e832d07318d2e881f19773a33521b47733fd5c3f1a726310283faed" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.022148 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b89b78a-9da6-40b4-8285-4311083ba178" path="/var/lib/kubelet/pods/2b89b78a-9da6-40b4-8285-4311083ba178/volumes" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.022444 5072 scope.go:117] "RemoveContainer" containerID="b801c1017f6294f4297f2b42cec67a18b4deaf2e731e8ff53b3741d589d06f0f" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.022978 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f57ff17-1692-4fef-ba23-2b510f5a748b" path="/var/lib/kubelet/pods/2f57ff17-1692-4fef-ba23-2b510f5a748b/volumes" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.023758 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbeb508a-245e-4c6c-9d4f-6f6f330cea5d" path="/var/lib/kubelet/pods/cbeb508a-245e-4c6c-9d4f-6f6f330cea5d/volumes" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.025057 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9b1a9a7-8932-4045-bd63-bbc4d796d018" path="/var/lib/kubelet/pods/f9b1a9a7-8932-4045-bd63-bbc4d796d018/volumes" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.025869 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff258f9c-6ace-46bf-8228-05668edcbdd6" path="/var/lib/kubelet/pods/ff258f9c-6ace-46bf-8228-05668edcbdd6/volumes" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.038326 5072 scope.go:117] "RemoveContainer" containerID="cfa1f17f667120865c41ae475f888857d09a6046a2db2a5e183afb10aa27917a" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.059502 5072 scope.go:117] "RemoveContainer" containerID="b0c70158acfffa159f35a11f033f93bcec4e3685da783bab60c14a51202ff508" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.070780 5072 scope.go:117] "RemoveContainer" containerID="354161dd7b5489d7b2051618e9e789bb0fd65b0a4002cd0ed1de42a154b8cf81" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.084557 5072 scope.go:117] "RemoveContainer" containerID="14e7ad0acb5f9b40b7aac0926e576bfa93a8d825bd873ca1062264032f09368e" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.745026 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4jrmf"] Nov 24 11:13:19 crc kubenswrapper[5072]: E1124 11:13:19.745406 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b89b78a-9da6-40b4-8285-4311083ba178" containerName="registry-server" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.745436 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b89b78a-9da6-40b4-8285-4311083ba178" containerName="registry-server" Nov 24 11:13:19 crc kubenswrapper[5072]: E1124 11:13:19.745465 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b89b78a-9da6-40b4-8285-4311083ba178" containerName="extract-content" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.745483 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b89b78a-9da6-40b4-8285-4311083ba178" containerName="extract-content" Nov 24 11:13:19 crc kubenswrapper[5072]: E1124 11:13:19.745507 5072 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2f57ff17-1692-4fef-ba23-2b510f5a748b" containerName="registry-server" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.745524 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f57ff17-1692-4fef-ba23-2b510f5a748b" containerName="registry-server" Nov 24 11:13:19 crc kubenswrapper[5072]: E1124 11:13:19.745543 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b1a9a7-8932-4045-bd63-bbc4d796d018" containerName="extract-content" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.745558 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b1a9a7-8932-4045-bd63-bbc4d796d018" containerName="extract-content" Nov 24 11:13:19 crc kubenswrapper[5072]: E1124 11:13:19.745580 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f57ff17-1692-4fef-ba23-2b510f5a748b" containerName="extract-utilities" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.745595 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f57ff17-1692-4fef-ba23-2b510f5a748b" containerName="extract-utilities" Nov 24 11:13:19 crc kubenswrapper[5072]: E1124 11:13:19.745615 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbeb508a-245e-4c6c-9d4f-6f6f330cea5d" containerName="registry-server" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.745629 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbeb508a-245e-4c6c-9d4f-6f6f330cea5d" containerName="registry-server" Nov 24 11:13:19 crc kubenswrapper[5072]: E1124 11:13:19.745648 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbeb508a-245e-4c6c-9d4f-6f6f330cea5d" containerName="extract-content" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.745661 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbeb508a-245e-4c6c-9d4f-6f6f330cea5d" containerName="extract-content" Nov 24 11:13:19 crc kubenswrapper[5072]: E1124 11:13:19.745678 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b1a9a7-8932-4045-bd63-bbc4d796d018" containerName="registry-server" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.745690 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b1a9a7-8932-4045-bd63-bbc4d796d018" containerName="registry-server" Nov 24 11:13:19 crc kubenswrapper[5072]: E1124 11:13:19.745707 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff258f9c-6ace-46bf-8228-05668edcbdd6" containerName="marketplace-operator" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.745719 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff258f9c-6ace-46bf-8228-05668edcbdd6" containerName="marketplace-operator" Nov 24 11:13:19 crc kubenswrapper[5072]: E1124 11:13:19.745736 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbeb508a-245e-4c6c-9d4f-6f6f330cea5d" containerName="extract-utilities" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.745748 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbeb508a-245e-4c6c-9d4f-6f6f330cea5d" containerName="extract-utilities" Nov 24 11:13:19 crc kubenswrapper[5072]: E1124 11:13:19.745761 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f57ff17-1692-4fef-ba23-2b510f5a748b" containerName="extract-content" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.745775 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f57ff17-1692-4fef-ba23-2b510f5a748b" containerName="extract-content" Nov 24 11:13:19 crc kubenswrapper[5072]: E1124 11:13:19.745790 5072 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f9b1a9a7-8932-4045-bd63-bbc4d796d018" containerName="extract-utilities" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.745801 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b1a9a7-8932-4045-bd63-bbc4d796d018" containerName="extract-utilities" Nov 24 11:13:19 crc kubenswrapper[5072]: E1124 11:13:19.745820 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b89b78a-9da6-40b4-8285-4311083ba178" containerName="extract-utilities" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.745832 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b89b78a-9da6-40b4-8285-4311083ba178" containerName="extract-utilities" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.745988 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9b1a9a7-8932-4045-bd63-bbc4d796d018" containerName="registry-server" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.746004 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbeb508a-245e-4c6c-9d4f-6f6f330cea5d" containerName="registry-server" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.746026 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f57ff17-1692-4fef-ba23-2b510f5a748b" containerName="registry-server" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.746042 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff258f9c-6ace-46bf-8228-05668edcbdd6" containerName="marketplace-operator" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.746063 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b89b78a-9da6-40b4-8285-4311083ba178" containerName="registry-server" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.747266 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4jrmf" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.749461 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.754922 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4jrmf"] Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.786207 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afa685e2-1d27-44a0-bdb9-ee494b9e8190-catalog-content\") pod \"redhat-marketplace-4jrmf\" (UID: \"afa685e2-1d27-44a0-bdb9-ee494b9e8190\") " pod="openshift-marketplace/redhat-marketplace-4jrmf" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.786289 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afa685e2-1d27-44a0-bdb9-ee494b9e8190-utilities\") pod \"redhat-marketplace-4jrmf\" (UID: \"afa685e2-1d27-44a0-bdb9-ee494b9e8190\") " pod="openshift-marketplace/redhat-marketplace-4jrmf" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.786461 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x957j\" (UniqueName: \"kubernetes.io/projected/afa685e2-1d27-44a0-bdb9-ee494b9e8190-kube-api-access-x957j\") pod \"redhat-marketplace-4jrmf\" (UID: \"afa685e2-1d27-44a0-bdb9-ee494b9e8190\") " pod="openshift-marketplace/redhat-marketplace-4jrmf" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.887069 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x957j\" (UniqueName: \"kubernetes.io/projected/afa685e2-1d27-44a0-bdb9-ee494b9e8190-kube-api-access-x957j\") pod \"redhat-marketplace-4jrmf\" (UID: \"afa685e2-1d27-44a0-bdb9-ee494b9e8190\") " pod="openshift-marketplace/redhat-marketplace-4jrmf" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.887130 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afa685e2-1d27-44a0-bdb9-ee494b9e8190-catalog-content\") pod \"redhat-marketplace-4jrmf\" (UID: \"afa685e2-1d27-44a0-bdb9-ee494b9e8190\") " pod="openshift-marketplace/redhat-marketplace-4jrmf" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.887155 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afa685e2-1d27-44a0-bdb9-ee494b9e8190-utilities\") pod \"redhat-marketplace-4jrmf\" (UID: \"afa685e2-1d27-44a0-bdb9-ee494b9e8190\") " pod="openshift-marketplace/redhat-marketplace-4jrmf" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.887830 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afa685e2-1d27-44a0-bdb9-ee494b9e8190-utilities\") pod \"redhat-marketplace-4jrmf\" (UID: \"afa685e2-1d27-44a0-bdb9-ee494b9e8190\") " pod="openshift-marketplace/redhat-marketplace-4jrmf" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.891256 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afa685e2-1d27-44a0-bdb9-ee494b9e8190-catalog-content\") pod \"redhat-marketplace-4jrmf\" (UID: 
\"afa685e2-1d27-44a0-bdb9-ee494b9e8190\") " pod="openshift-marketplace/redhat-marketplace-4jrmf" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.910451 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x957j\" (UniqueName: \"kubernetes.io/projected/afa685e2-1d27-44a0-bdb9-ee494b9e8190-kube-api-access-x957j\") pod \"redhat-marketplace-4jrmf\" (UID: \"afa685e2-1d27-44a0-bdb9-ee494b9e8190\") " pod="openshift-marketplace/redhat-marketplace-4jrmf" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.951073 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ksmz7"] Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.959629 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ksmz7"] Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.959697 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ksmz7" Nov 24 11:13:19 crc kubenswrapper[5072]: I1124 11:13:19.964549 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 24 11:13:20 crc kubenswrapper[5072]: I1124 11:13:20.074720 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4jrmf" Nov 24 11:13:20 crc kubenswrapper[5072]: I1124 11:13:20.089258 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8w95\" (UniqueName: \"kubernetes.io/projected/467abc7c-eb59-4ec5-a2c4-369c84e0faf0-kube-api-access-s8w95\") pod \"redhat-operators-ksmz7\" (UID: \"467abc7c-eb59-4ec5-a2c4-369c84e0faf0\") " pod="openshift-marketplace/redhat-operators-ksmz7" Nov 24 11:13:20 crc kubenswrapper[5072]: I1124 11:13:20.089314 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/467abc7c-eb59-4ec5-a2c4-369c84e0faf0-utilities\") pod \"redhat-operators-ksmz7\" (UID: \"467abc7c-eb59-4ec5-a2c4-369c84e0faf0\") " pod="openshift-marketplace/redhat-operators-ksmz7" Nov 24 11:13:20 crc kubenswrapper[5072]: I1124 11:13:20.089337 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/467abc7c-eb59-4ec5-a2c4-369c84e0faf0-catalog-content\") pod \"redhat-operators-ksmz7\" (UID: \"467abc7c-eb59-4ec5-a2c4-369c84e0faf0\") " pod="openshift-marketplace/redhat-operators-ksmz7" Nov 24 11:13:20 crc kubenswrapper[5072]: I1124 11:13:20.191825 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/467abc7c-eb59-4ec5-a2c4-369c84e0faf0-utilities\") pod \"redhat-operators-ksmz7\" (UID: \"467abc7c-eb59-4ec5-a2c4-369c84e0faf0\") " pod="openshift-marketplace/redhat-operators-ksmz7" Nov 24 11:13:20 crc kubenswrapper[5072]: I1124 11:13:20.192601 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/467abc7c-eb59-4ec5-a2c4-369c84e0faf0-catalog-content\") pod \"redhat-operators-ksmz7\" (UID: \"467abc7c-eb59-4ec5-a2c4-369c84e0faf0\") " pod="openshift-marketplace/redhat-operators-ksmz7" Nov 24 11:13:20 crc kubenswrapper[5072]: I1124 11:13:20.192547 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/467abc7c-eb59-4ec5-a2c4-369c84e0faf0-utilities\") pod \"redhat-operators-ksmz7\" (UID: \"467abc7c-eb59-4ec5-a2c4-369c84e0faf0\") " pod="openshift-marketplace/redhat-operators-ksmz7" Nov 24 11:13:20 crc kubenswrapper[5072]: I1124 11:13:20.192962 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/467abc7c-eb59-4ec5-a2c4-369c84e0faf0-catalog-content\") pod \"redhat-operators-ksmz7\" (UID: \"467abc7c-eb59-4ec5-a2c4-369c84e0faf0\") " pod="openshift-marketplace/redhat-operators-ksmz7" Nov 24 11:13:20 crc kubenswrapper[5072]: I1124 11:13:20.193041 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8w95\" (UniqueName: \"kubernetes.io/projected/467abc7c-eb59-4ec5-a2c4-369c84e0faf0-kube-api-access-s8w95\") pod \"redhat-operators-ksmz7\" (UID: \"467abc7c-eb59-4ec5-a2c4-369c84e0faf0\") " pod="openshift-marketplace/redhat-operators-ksmz7" Nov 24 11:13:20 crc kubenswrapper[5072]: I1124 11:13:20.217103 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8w95\" (UniqueName: \"kubernetes.io/projected/467abc7c-eb59-4ec5-a2c4-369c84e0faf0-kube-api-access-s8w95\") pod \"redhat-operators-ksmz7\" (UID: \"467abc7c-eb59-4ec5-a2c4-369c84e0faf0\") " pod="openshift-marketplace/redhat-operators-ksmz7" Nov 24 11:13:20 crc kubenswrapper[5072]: I1124 11:13:20.323899 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ksmz7" Nov 24 11:13:20 crc kubenswrapper[5072]: I1124 11:13:20.476642 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4jrmf"] Nov 24 11:13:20 crc kubenswrapper[5072]: W1124 11:13:20.482047 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafa685e2_1d27_44a0_bdb9_ee494b9e8190.slice/crio-6b934093dc43a1434299ebdc18ec2a2179ab2e7049477686db89e606b5cded1a WatchSource:0}: Error finding container 6b934093dc43a1434299ebdc18ec2a2179ab2e7049477686db89e606b5cded1a: Status 404 returned error can't find the container with id 6b934093dc43a1434299ebdc18ec2a2179ab2e7049477686db89e606b5cded1a Nov 24 11:13:20 crc kubenswrapper[5072]: I1124 11:13:20.732989 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ksmz7"] Nov 24 11:13:20 crc kubenswrapper[5072]: I1124 11:13:20.841448 5072 generic.go:334] "Generic (PLEG): container finished" podID="afa685e2-1d27-44a0-bdb9-ee494b9e8190" containerID="bd9acae6799e58a810b00496a133f05f731199bb56594f6416a88425b0c42f48" exitCode=0 Nov 24 11:13:20 crc kubenswrapper[5072]: I1124 11:13:20.841515 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jrmf" event={"ID":"afa685e2-1d27-44a0-bdb9-ee494b9e8190","Type":"ContainerDied","Data":"bd9acae6799e58a810b00496a133f05f731199bb56594f6416a88425b0c42f48"} Nov 24 11:13:20 crc kubenswrapper[5072]: I1124 11:13:20.841545 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jrmf" event={"ID":"afa685e2-1d27-44a0-bdb9-ee494b9e8190","Type":"ContainerStarted","Data":"6b934093dc43a1434299ebdc18ec2a2179ab2e7049477686db89e606b5cded1a"} Nov 24 11:13:20 crc kubenswrapper[5072]: I1124 11:13:20.845112 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ksmz7" 
event={"ID":"467abc7c-eb59-4ec5-a2c4-369c84e0faf0","Type":"ContainerStarted","Data":"4060d77882da33da081cb5f154733d3ee098936154f299adff42abec84551738"} Nov 24 11:13:21 crc kubenswrapper[5072]: I1124 11:13:21.852760 5072 generic.go:334] "Generic (PLEG): container finished" podID="afa685e2-1d27-44a0-bdb9-ee494b9e8190" containerID="0130fb79d97b0275e8f10d5628cd2a6e27496e1af0f0ebf0c66c6c55ac62bcef" exitCode=0 Nov 24 11:13:21 crc kubenswrapper[5072]: I1124 11:13:21.852848 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jrmf" event={"ID":"afa685e2-1d27-44a0-bdb9-ee494b9e8190","Type":"ContainerDied","Data":"0130fb79d97b0275e8f10d5628cd2a6e27496e1af0f0ebf0c66c6c55ac62bcef"} Nov 24 11:13:21 crc kubenswrapper[5072]: I1124 11:13:21.856592 5072 generic.go:334] "Generic (PLEG): container finished" podID="467abc7c-eb59-4ec5-a2c4-369c84e0faf0" containerID="baefadfc40c28655b92b039612a9635d5d3a4a1a0be45421895c4dd4af02ab7f" exitCode=0 Nov 24 11:13:21 crc kubenswrapper[5072]: I1124 11:13:21.856637 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ksmz7" event={"ID":"467abc7c-eb59-4ec5-a2c4-369c84e0faf0","Type":"ContainerDied","Data":"baefadfc40c28655b92b039612a9635d5d3a4a1a0be45421895c4dd4af02ab7f"} Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.147684 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9k5tg"] Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.149616 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9k5tg" Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.151819 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.158549 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9k5tg"] Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.218170 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b603ce-232a-4aa0-b6c7-fd3a47d3031c-utilities\") pod \"community-operators-9k5tg\" (UID: \"73b603ce-232a-4aa0-b6c7-fd3a47d3031c\") " pod="openshift-marketplace/community-operators-9k5tg" Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.218360 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b603ce-232a-4aa0-b6c7-fd3a47d3031c-catalog-content\") pod \"community-operators-9k5tg\" (UID: \"73b603ce-232a-4aa0-b6c7-fd3a47d3031c\") " pod="openshift-marketplace/community-operators-9k5tg" Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.218410 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bml6p\" (UniqueName: \"kubernetes.io/projected/73b603ce-232a-4aa0-b6c7-fd3a47d3031c-kube-api-access-bml6p\") pod \"community-operators-9k5tg\" (UID: \"73b603ce-232a-4aa0-b6c7-fd3a47d3031c\") " pod="openshift-marketplace/community-operators-9k5tg" Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.319526 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b603ce-232a-4aa0-b6c7-fd3a47d3031c-catalog-content\") pod 
\"community-operators-9k5tg\" (UID: \"73b603ce-232a-4aa0-b6c7-fd3a47d3031c\") " pod="openshift-marketplace/community-operators-9k5tg" Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.319586 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bml6p\" (UniqueName: \"kubernetes.io/projected/73b603ce-232a-4aa0-b6c7-fd3a47d3031c-kube-api-access-bml6p\") pod \"community-operators-9k5tg\" (UID: \"73b603ce-232a-4aa0-b6c7-fd3a47d3031c\") " pod="openshift-marketplace/community-operators-9k5tg" Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.319678 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b603ce-232a-4aa0-b6c7-fd3a47d3031c-utilities\") pod \"community-operators-9k5tg\" (UID: \"73b603ce-232a-4aa0-b6c7-fd3a47d3031c\") " pod="openshift-marketplace/community-operators-9k5tg" Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.320344 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b603ce-232a-4aa0-b6c7-fd3a47d3031c-catalog-content\") pod \"community-operators-9k5tg\" (UID: \"73b603ce-232a-4aa0-b6c7-fd3a47d3031c\") " pod="openshift-marketplace/community-operators-9k5tg" Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.320388 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b603ce-232a-4aa0-b6c7-fd3a47d3031c-utilities\") pod \"community-operators-9k5tg\" (UID: \"73b603ce-232a-4aa0-b6c7-fd3a47d3031c\") " pod="openshift-marketplace/community-operators-9k5tg" Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.340317 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b8kkq"] Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.341493 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b8kkq" Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.341884 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bml6p\" (UniqueName: \"kubernetes.io/projected/73b603ce-232a-4aa0-b6c7-fd3a47d3031c-kube-api-access-bml6p\") pod \"community-operators-9k5tg\" (UID: \"73b603ce-232a-4aa0-b6c7-fd3a47d3031c\") " pod="openshift-marketplace/community-operators-9k5tg" Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.348171 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.352405 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b8kkq"] Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.420227 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b414b96-7437-45fe-82ff-663bdd600440-utilities\") pod \"certified-operators-b8kkq\" (UID: \"0b414b96-7437-45fe-82ff-663bdd600440\") " pod="openshift-marketplace/certified-operators-b8kkq" Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.420275 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dxjl\" (UniqueName: \"kubernetes.io/projected/0b414b96-7437-45fe-82ff-663bdd600440-kube-api-access-6dxjl\") pod \"certified-operators-b8kkq\" (UID: \"0b414b96-7437-45fe-82ff-663bdd600440\") " pod="openshift-marketplace/certified-operators-b8kkq" Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.420422 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b414b96-7437-45fe-82ff-663bdd600440-catalog-content\") pod \"certified-operators-b8kkq\" (UID: \"0b414b96-7437-45fe-82ff-663bdd600440\") " pod="openshift-marketplace/certified-operators-b8kkq" Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.474584 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9k5tg" Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.520903 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b414b96-7437-45fe-82ff-663bdd600440-catalog-content\") pod \"certified-operators-b8kkq\" (UID: \"0b414b96-7437-45fe-82ff-663bdd600440\") " pod="openshift-marketplace/certified-operators-b8kkq" Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.521002 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b414b96-7437-45fe-82ff-663bdd600440-utilities\") pod \"certified-operators-b8kkq\" (UID: \"0b414b96-7437-45fe-82ff-663bdd600440\") " pod="openshift-marketplace/certified-operators-b8kkq" Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.521024 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dxjl\" (UniqueName: \"kubernetes.io/projected/0b414b96-7437-45fe-82ff-663bdd600440-kube-api-access-6dxjl\") pod \"certified-operators-b8kkq\" (UID: \"0b414b96-7437-45fe-82ff-663bdd600440\") " pod="openshift-marketplace/certified-operators-b8kkq" Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.521651 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b414b96-7437-45fe-82ff-663bdd600440-utilities\") pod \"certified-operators-b8kkq\" (UID: \"0b414b96-7437-45fe-82ff-663bdd600440\") " pod="openshift-marketplace/certified-operators-b8kkq" Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.521958 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b414b96-7437-45fe-82ff-663bdd600440-catalog-content\") pod \"certified-operators-b8kkq\" (UID: \"0b414b96-7437-45fe-82ff-663bdd600440\") " pod="openshift-marketplace/certified-operators-b8kkq" Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.546201 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dxjl\" (UniqueName: \"kubernetes.io/projected/0b414b96-7437-45fe-82ff-663bdd600440-kube-api-access-6dxjl\") pod \"certified-operators-b8kkq\" (UID: \"0b414b96-7437-45fe-82ff-663bdd600440\") " pod="openshift-marketplace/certified-operators-b8kkq" Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.677734 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9k5tg"] Nov 24 11:13:22 crc kubenswrapper[5072]: W1124 11:13:22.684359 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73b603ce_232a_4aa0_b6c7_fd3a47d3031c.slice/crio-a8621cb41477fc4222c30d915a84f480e740d6bc67bcebcf96a3b8e76b3d7ffb WatchSource:0}: Error finding container a8621cb41477fc4222c30d915a84f480e740d6bc67bcebcf96a3b8e76b3d7ffb: Status 404 returned error can't find the container with id a8621cb41477fc4222c30d915a84f480e740d6bc67bcebcf96a3b8e76b3d7ffb Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.712183 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b8kkq" Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.864325 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jrmf" event={"ID":"afa685e2-1d27-44a0-bdb9-ee494b9e8190","Type":"ContainerStarted","Data":"7d11ed11aa22616c9d215673b8da012cf254d9bbe3761b80fcdefa1090ffb2a1"} Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.866475 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ksmz7" event={"ID":"467abc7c-eb59-4ec5-a2c4-369c84e0faf0","Type":"ContainerStarted","Data":"6ee720e6a5ffa51974c45dbd7049855b267b3ce32fe74361231e80170f725c96"} Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.867766 5072 generic.go:334] "Generic (PLEG): container finished" podID="73b603ce-232a-4aa0-b6c7-fd3a47d3031c" containerID="ce6eec7b31dc9ee5918dc3c9b466e5a1f1d662881a13ddf235ca586d4f2a4e9f" exitCode=0 Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.867805 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k5tg" event={"ID":"73b603ce-232a-4aa0-b6c7-fd3a47d3031c","Type":"ContainerDied","Data":"ce6eec7b31dc9ee5918dc3c9b466e5a1f1d662881a13ddf235ca586d4f2a4e9f"} Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.867823 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k5tg" event={"ID":"73b603ce-232a-4aa0-b6c7-fd3a47d3031c","Type":"ContainerStarted","Data":"a8621cb41477fc4222c30d915a84f480e740d6bc67bcebcf96a3b8e76b3d7ffb"} Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.881436 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4jrmf" podStartSLOduration=2.176622308 podStartE2EDuration="3.881418416s" podCreationTimestamp="2025-11-24 11:13:19 +0000 UTC" firstStartedPulling="2025-11-24 11:13:20.842676662 +0000 UTC m=+252.554201138" lastFinishedPulling="2025-11-24 11:13:22.54747277 +0000 UTC m=+254.258997246" observedRunningTime="2025-11-24 11:13:22.879461387 +0000 UTC m=+254.590985863" watchObservedRunningTime="2025-11-24 11:13:22.881418416 +0000 UTC m=+254.592942892" Nov 24 11:13:22 crc kubenswrapper[5072]: I1124 11:13:22.887312 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b8kkq"] Nov 24 11:13:22 crc kubenswrapper[5072]: W1124 11:13:22.891749 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b414b96_7437_45fe_82ff_663bdd600440.slice/crio-be697386c57238c6751581fc81e413350d96bab7292196781beeeef0ac5bccab WatchSource:0}: Error finding container be697386c57238c6751581fc81e413350d96bab7292196781beeeef0ac5bccab: Status 404 returned error can't find the container with id be697386c57238c6751581fc81e413350d96bab7292196781beeeef0ac5bccab Nov 24 11:13:23 crc kubenswrapper[5072]: I1124 11:13:23.878174 5072 generic.go:334] "Generic (PLEG): container finished" podID="467abc7c-eb59-4ec5-a2c4-369c84e0faf0" containerID="6ee720e6a5ffa51974c45dbd7049855b267b3ce32fe74361231e80170f725c96" exitCode=0 Nov 24 11:13:23 crc kubenswrapper[5072]: I1124 11:13:23.878236 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ksmz7" event={"ID":"467abc7c-eb59-4ec5-a2c4-369c84e0faf0","Type":"ContainerDied","Data":"6ee720e6a5ffa51974c45dbd7049855b267b3ce32fe74361231e80170f725c96"} Nov 24 11:13:23 
crc kubenswrapper[5072]: I1124 11:13:23.882715 5072 generic.go:334] "Generic (PLEG): container finished" podID="73b603ce-232a-4aa0-b6c7-fd3a47d3031c" containerID="e4c9d497cbed7bb7513114d0a51f47637c86b0474a697317e2bedf6e24582b3a" exitCode=0 Nov 24 11:13:23 crc kubenswrapper[5072]: I1124 11:13:23.882803 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k5tg" event={"ID":"73b603ce-232a-4aa0-b6c7-fd3a47d3031c","Type":"ContainerDied","Data":"e4c9d497cbed7bb7513114d0a51f47637c86b0474a697317e2bedf6e24582b3a"} Nov 24 11:13:23 crc kubenswrapper[5072]: I1124 11:13:23.886967 5072 generic.go:334] "Generic (PLEG): container finished" podID="0b414b96-7437-45fe-82ff-663bdd600440" containerID="6f328a59edf2219f2f6d192829aed4f3b4a43885f5b846d02c2148d6977f6158" exitCode=0 Nov 24 11:13:23 crc kubenswrapper[5072]: I1124 11:13:23.887629 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8kkq" event={"ID":"0b414b96-7437-45fe-82ff-663bdd600440","Type":"ContainerDied","Data":"6f328a59edf2219f2f6d192829aed4f3b4a43885f5b846d02c2148d6977f6158"} Nov 24 11:13:23 crc kubenswrapper[5072]: I1124 11:13:23.887651 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8kkq" event={"ID":"0b414b96-7437-45fe-82ff-663bdd600440","Type":"ContainerStarted","Data":"be697386c57238c6751581fc81e413350d96bab7292196781beeeef0ac5bccab"} Nov 24 11:13:24 crc kubenswrapper[5072]: I1124 11:13:24.894244 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8kkq" event={"ID":"0b414b96-7437-45fe-82ff-663bdd600440","Type":"ContainerStarted","Data":"aa0e704e1b79abc090e709959bdc13efcc649f53206a4e817d4e4496322bd333"} Nov 24 11:13:24 crc kubenswrapper[5072]: I1124 11:13:24.898126 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ksmz7" event={"ID":"467abc7c-eb59-4ec5-a2c4-369c84e0faf0","Type":"ContainerStarted","Data":"4988b575732bdb3f1db4a4f92bcc39bafa8b28d2514d18be755d15a6cb247305"} Nov 24 11:13:24 crc kubenswrapper[5072]: I1124 11:13:24.900162 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k5tg" event={"ID":"73b603ce-232a-4aa0-b6c7-fd3a47d3031c","Type":"ContainerStarted","Data":"ca49a9bae1e976a5495655ba26bd93da29a0bc9240d928a311ee7fd613b90d55"} Nov 24 11:13:24 crc kubenswrapper[5072]: I1124 11:13:24.932147 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9k5tg" podStartSLOduration=1.40299266 podStartE2EDuration="2.932122323s" podCreationTimestamp="2025-11-24 11:13:22 +0000 UTC" firstStartedPulling="2025-11-24 11:13:22.869246557 +0000 UTC m=+254.580771033" lastFinishedPulling="2025-11-24 11:13:24.39837622 +0000 UTC m=+256.109900696" observedRunningTime="2025-11-24 11:13:24.931014794 +0000 UTC m=+256.642539280" watchObservedRunningTime="2025-11-24 11:13:24.932122323 +0000 UTC m=+256.643646839" Nov 24 11:13:25 crc kubenswrapper[5072]: I1124 11:13:25.907327 5072 generic.go:334] "Generic (PLEG): container finished" podID="0b414b96-7437-45fe-82ff-663bdd600440" containerID="aa0e704e1b79abc090e709959bdc13efcc649f53206a4e817d4e4496322bd333" exitCode=0 Nov 24 11:13:25 crc kubenswrapper[5072]: I1124 11:13:25.907438 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8kkq" 
event={"ID":"0b414b96-7437-45fe-82ff-663bdd600440","Type":"ContainerDied","Data":"aa0e704e1b79abc090e709959bdc13efcc649f53206a4e817d4e4496322bd333"} Nov 24 11:13:25 crc kubenswrapper[5072]: I1124 11:13:25.929785 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ksmz7" podStartSLOduration=4.292386328 podStartE2EDuration="6.929762213s" podCreationTimestamp="2025-11-24 11:13:19 +0000 UTC" firstStartedPulling="2025-11-24 11:13:21.865586724 +0000 UTC m=+253.577111190" lastFinishedPulling="2025-11-24 11:13:24.502962599 +0000 UTC m=+256.214487075" observedRunningTime="2025-11-24 11:13:24.950324945 +0000 UTC m=+256.661849421" watchObservedRunningTime="2025-11-24 11:13:25.929762213 +0000 UTC m=+257.641286719" Nov 24 11:13:27 crc kubenswrapper[5072]: I1124 11:13:27.919665 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8kkq" event={"ID":"0b414b96-7437-45fe-82ff-663bdd600440","Type":"ContainerStarted","Data":"3a2da54ec0fab391913eb8ba5e18e520737d426c1b674e18d4fe495a136461d3"} Nov 24 11:13:27 crc kubenswrapper[5072]: I1124 11:13:27.942218 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b8kkq" podStartSLOduration=2.5628305 podStartE2EDuration="5.94220359s" podCreationTimestamp="2025-11-24 11:13:22 +0000 UTC" firstStartedPulling="2025-11-24 11:13:23.888037834 +0000 UTC m=+255.599562310" lastFinishedPulling="2025-11-24 11:13:27.267410924 +0000 UTC m=+258.978935400" observedRunningTime="2025-11-24 11:13:27.939124782 +0000 UTC m=+259.650649258" watchObservedRunningTime="2025-11-24 11:13:27.94220359 +0000 UTC m=+259.653728066" Nov 24 11:13:30 crc kubenswrapper[5072]: I1124 11:13:30.075320 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4jrmf" Nov 24 11:13:30 crc kubenswrapper[5072]: I1124 11:13:30.075689 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4jrmf" Nov 24 11:13:30 crc kubenswrapper[5072]: I1124 11:13:30.120139 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4jrmf" Nov 24 11:13:30 crc kubenswrapper[5072]: I1124 11:13:30.324277 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ksmz7" Nov 24 11:13:30 crc kubenswrapper[5072]: I1124 11:13:30.324342 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ksmz7" Nov 24 11:13:30 crc kubenswrapper[5072]: I1124 11:13:30.980781 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4jrmf" Nov 24 11:13:31 crc kubenswrapper[5072]: I1124 11:13:31.373812 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ksmz7" podUID="467abc7c-eb59-4ec5-a2c4-369c84e0faf0" containerName="registry-server" probeResult="failure" output=< Nov 24 11:13:31 crc kubenswrapper[5072]: timeout: failed to connect service ":50051" within 1s Nov 24 11:13:31 crc kubenswrapper[5072]: > Nov 24 11:13:32 crc kubenswrapper[5072]: I1124 11:13:32.474927 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9k5tg" Nov 24 11:13:32 crc kubenswrapper[5072]: I1124 11:13:32.475301 5072 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9k5tg" Nov 24 11:13:32 crc kubenswrapper[5072]: I1124 11:13:32.519610 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9k5tg" Nov 24 11:13:32 crc kubenswrapper[5072]: I1124 11:13:32.713195 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b8kkq" Nov 24 11:13:32 crc kubenswrapper[5072]: I1124 11:13:32.713281 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b8kkq" Nov 24 11:13:32 crc kubenswrapper[5072]: I1124 11:13:32.777230 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b8kkq" Nov 24 11:13:32 crc kubenswrapper[5072]: I1124 11:13:32.985171 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9k5tg" Nov 24 11:13:32 crc kubenswrapper[5072]: I1124 11:13:32.994500 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b8kkq" Nov 24 11:13:40 crc kubenswrapper[5072]: I1124 11:13:40.396534 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ksmz7" Nov 24 11:13:40 crc kubenswrapper[5072]: I1124 11:13:40.450766 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ksmz7" Nov 24 11:14:43 crc kubenswrapper[5072]: I1124 11:14:43.645764 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:14:43 crc kubenswrapper[5072]: I1124 11:14:43.646685 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:15:00 crc kubenswrapper[5072]: I1124 11:15:00.139338 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399715-tdrwn"] Nov 24 11:15:00 crc kubenswrapper[5072]: I1124 11:15:00.140525 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-tdrwn" Nov 24 11:15:00 crc kubenswrapper[5072]: I1124 11:15:00.142575 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 11:15:00 crc kubenswrapper[5072]: I1124 11:15:00.142641 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 11:15:00 crc kubenswrapper[5072]: I1124 11:15:00.149486 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399715-tdrwn"] Nov 24 11:15:00 crc kubenswrapper[5072]: I1124 11:15:00.246982 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad3bb474-e119-49eb-a13d-3c71b170fb33-config-volume\") pod \"collect-profiles-29399715-tdrwn\" (UID: \"ad3bb474-e119-49eb-a13d-3c71b170fb33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-tdrwn" Nov 24 11:15:00 crc kubenswrapper[5072]: I1124 11:15:00.247049 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad3bb474-e119-49eb-a13d-3c71b170fb33-secret-volume\") pod \"collect-profiles-29399715-tdrwn\" (UID: \"ad3bb474-e119-49eb-a13d-3c71b170fb33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-tdrwn" Nov 24 11:15:00 crc kubenswrapper[5072]: I1124 11:15:00.247107 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9llgq\" (UniqueName: \"kubernetes.io/projected/ad3bb474-e119-49eb-a13d-3c71b170fb33-kube-api-access-9llgq\") pod \"collect-profiles-29399715-tdrwn\" (UID: \"ad3bb474-e119-49eb-a13d-3c71b170fb33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-tdrwn" Nov 24 11:15:00 crc kubenswrapper[5072]: I1124 11:15:00.349179 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad3bb474-e119-49eb-a13d-3c71b170fb33-config-volume\") pod \"collect-profiles-29399715-tdrwn\" (UID: \"ad3bb474-e119-49eb-a13d-3c71b170fb33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-tdrwn" Nov 24 11:15:00 crc kubenswrapper[5072]: I1124 11:15:00.349262 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad3bb474-e119-49eb-a13d-3c71b170fb33-secret-volume\") pod \"collect-profiles-29399715-tdrwn\" (UID: \"ad3bb474-e119-49eb-a13d-3c71b170fb33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-tdrwn" Nov 24 11:15:00 crc kubenswrapper[5072]: I1124 11:15:00.349316 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9llgq\" (UniqueName: \"kubernetes.io/projected/ad3bb474-e119-49eb-a13d-3c71b170fb33-kube-api-access-9llgq\") pod \"collect-profiles-29399715-tdrwn\" (UID: \"ad3bb474-e119-49eb-a13d-3c71b170fb33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-tdrwn" Nov 24 11:15:00 crc kubenswrapper[5072]: I1124 11:15:00.351244 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad3bb474-e119-49eb-a13d-3c71b170fb33-config-volume\") pod 
\"collect-profiles-29399715-tdrwn\" (UID: \"ad3bb474-e119-49eb-a13d-3c71b170fb33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-tdrwn" Nov 24 11:15:00 crc kubenswrapper[5072]: I1124 11:15:00.357069 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad3bb474-e119-49eb-a13d-3c71b170fb33-secret-volume\") pod \"collect-profiles-29399715-tdrwn\" (UID: \"ad3bb474-e119-49eb-a13d-3c71b170fb33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-tdrwn" Nov 24 11:15:00 crc kubenswrapper[5072]: I1124 11:15:00.366227 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9llgq\" (UniqueName: \"kubernetes.io/projected/ad3bb474-e119-49eb-a13d-3c71b170fb33-kube-api-access-9llgq\") pod \"collect-profiles-29399715-tdrwn\" (UID: \"ad3bb474-e119-49eb-a13d-3c71b170fb33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-tdrwn" Nov 24 11:15:00 crc kubenswrapper[5072]: I1124 11:15:00.475295 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-tdrwn" Nov 24 11:15:00 crc kubenswrapper[5072]: I1124 11:15:00.746883 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399715-tdrwn"] Nov 24 11:15:01 crc kubenswrapper[5072]: I1124 11:15:01.516297 5072 generic.go:334] "Generic (PLEG): container finished" podID="ad3bb474-e119-49eb-a13d-3c71b170fb33" containerID="4a81dc24ed3d563a3996aa3e050718e3c7ea8d792b140465372cabc473f2a017" exitCode=0 Nov 24 11:15:01 crc kubenswrapper[5072]: I1124 11:15:01.516363 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-tdrwn" event={"ID":"ad3bb474-e119-49eb-a13d-3c71b170fb33","Type":"ContainerDied","Data":"4a81dc24ed3d563a3996aa3e050718e3c7ea8d792b140465372cabc473f2a017"} Nov 24 11:15:01 crc kubenswrapper[5072]: I1124 11:15:01.516589 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-tdrwn" event={"ID":"ad3bb474-e119-49eb-a13d-3c71b170fb33","Type":"ContainerStarted","Data":"6a444451bac5a8098f701bdd7319cbdc185b5f530497551068dec68045d6175e"} Nov 24 11:15:02 crc kubenswrapper[5072]: I1124 11:15:02.750717 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-tdrwn" Nov 24 11:15:02 crc kubenswrapper[5072]: I1124 11:15:02.881750 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad3bb474-e119-49eb-a13d-3c71b170fb33-config-volume\") pod \"ad3bb474-e119-49eb-a13d-3c71b170fb33\" (UID: \"ad3bb474-e119-49eb-a13d-3c71b170fb33\") " Nov 24 11:15:02 crc kubenswrapper[5072]: I1124 11:15:02.881843 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9llgq\" (UniqueName: \"kubernetes.io/projected/ad3bb474-e119-49eb-a13d-3c71b170fb33-kube-api-access-9llgq\") pod \"ad3bb474-e119-49eb-a13d-3c71b170fb33\" (UID: \"ad3bb474-e119-49eb-a13d-3c71b170fb33\") " Nov 24 11:15:02 crc kubenswrapper[5072]: I1124 11:15:02.881917 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad3bb474-e119-49eb-a13d-3c71b170fb33-config-volume" (OuterVolumeSpecName: "config-volume") pod "ad3bb474-e119-49eb-a13d-3c71b170fb33" (UID: "ad3bb474-e119-49eb-a13d-3c71b170fb33"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:15:02 crc kubenswrapper[5072]: I1124 11:15:02.882899 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad3bb474-e119-49eb-a13d-3c71b170fb33-secret-volume\") pod \"ad3bb474-e119-49eb-a13d-3c71b170fb33\" (UID: \"ad3bb474-e119-49eb-a13d-3c71b170fb33\") " Nov 24 11:15:02 crc kubenswrapper[5072]: I1124 11:15:02.883118 5072 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad3bb474-e119-49eb-a13d-3c71b170fb33-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 11:15:02 crc kubenswrapper[5072]: I1124 11:15:02.888165 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad3bb474-e119-49eb-a13d-3c71b170fb33-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ad3bb474-e119-49eb-a13d-3c71b170fb33" (UID: "ad3bb474-e119-49eb-a13d-3c71b170fb33"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:15:02 crc kubenswrapper[5072]: I1124 11:15:02.888760 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad3bb474-e119-49eb-a13d-3c71b170fb33-kube-api-access-9llgq" (OuterVolumeSpecName: "kube-api-access-9llgq") pod "ad3bb474-e119-49eb-a13d-3c71b170fb33" (UID: "ad3bb474-e119-49eb-a13d-3c71b170fb33"). InnerVolumeSpecName "kube-api-access-9llgq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:15:02 crc kubenswrapper[5072]: I1124 11:15:02.984877 5072 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad3bb474-e119-49eb-a13d-3c71b170fb33-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 11:15:02 crc kubenswrapper[5072]: I1124 11:15:02.984937 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9llgq\" (UniqueName: \"kubernetes.io/projected/ad3bb474-e119-49eb-a13d-3c71b170fb33-kube-api-access-9llgq\") on node \"crc\" DevicePath \"\"" Nov 24 11:15:03 crc kubenswrapper[5072]: I1124 11:15:03.530192 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-tdrwn" event={"ID":"ad3bb474-e119-49eb-a13d-3c71b170fb33","Type":"ContainerDied","Data":"6a444451bac5a8098f701bdd7319cbdc185b5f530497551068dec68045d6175e"} Nov 24 11:15:03 crc kubenswrapper[5072]: I1124 11:15:03.530248 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a444451bac5a8098f701bdd7319cbdc185b5f530497551068dec68045d6175e" Nov 24 11:15:03 crc kubenswrapper[5072]: I1124 11:15:03.530325 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-tdrwn" Nov 24 11:15:13 crc kubenswrapper[5072]: I1124 11:15:13.645053 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:15:13 crc kubenswrapper[5072]: I1124 11:15:13.645544 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:15:43 crc kubenswrapper[5072]: I1124 11:15:43.645784 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:15:43 crc kubenswrapper[5072]: I1124 11:15:43.646552 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:15:43 crc kubenswrapper[5072]: I1124 11:15:43.646789 5072 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 11:15:43 crc kubenswrapper[5072]: I1124 11:15:43.647756 5072 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e839d6d58c16c68cbc04eeeedb69dee8ec0dd6b4c9bf97590bae2b1dd76b231f"} pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 11:15:43 crc 
kubenswrapper[5072]: I1124 11:15:43.647891 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" containerID="cri-o://e839d6d58c16c68cbc04eeeedb69dee8ec0dd6b4c9bf97590bae2b1dd76b231f" gracePeriod=600 Nov 24 11:15:43 crc kubenswrapper[5072]: I1124 11:15:43.803518 5072 generic.go:334] "Generic (PLEG): container finished" podID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerID="e839d6d58c16c68cbc04eeeedb69dee8ec0dd6b4c9bf97590bae2b1dd76b231f" exitCode=0 Nov 24 11:15:43 crc kubenswrapper[5072]: I1124 11:15:43.803580 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerDied","Data":"e839d6d58c16c68cbc04eeeedb69dee8ec0dd6b4c9bf97590bae2b1dd76b231f"} Nov 24 11:15:43 crc kubenswrapper[5072]: I1124 11:15:43.803625 5072 scope.go:117] "RemoveContainer" containerID="a3509fd52379451e43594c096ef652d92778331f2aef6b689e547f35a384b976" Nov 24 11:15:44 crc kubenswrapper[5072]: I1124 11:15:44.811934 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerStarted","Data":"0a6ebf9514d44fa623afa2ad42e78869426bcafc62c418072ab42294a40efd6e"} Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.517724 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-sg6ss"] Nov 24 11:16:56 crc kubenswrapper[5072]: E1124 11:16:56.518421 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad3bb474-e119-49eb-a13d-3c71b170fb33" containerName="collect-profiles" Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.518434 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad3bb474-e119-49eb-a13d-3c71b170fb33" containerName="collect-profiles" Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.518536 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad3bb474-e119-49eb-a13d-3c71b170fb33" containerName="collect-profiles" Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.518885 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.536855 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-sg6ss"] Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.617068 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/50a54df2-bf89-40c1-a4da-24859f6d2afe-registry-tls\") pod \"image-registry-66df7c8f76-sg6ss\" (UID: \"50a54df2-bf89-40c1-a4da-24859f6d2afe\") " pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.617120 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50a54df2-bf89-40c1-a4da-24859f6d2afe-bound-sa-token\") pod \"image-registry-66df7c8f76-sg6ss\" (UID: \"50a54df2-bf89-40c1-a4da-24859f6d2afe\") " pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.617169 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z27q\" (UniqueName: \"kubernetes.io/projected/50a54df2-bf89-40c1-a4da-24859f6d2afe-kube-api-access-8z27q\") pod \"image-registry-66df7c8f76-sg6ss\" (UID: \"50a54df2-bf89-40c1-a4da-24859f6d2afe\") " pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.617194 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/50a54df2-bf89-40c1-a4da-24859f6d2afe-registry-certificates\") pod \"image-registry-66df7c8f76-sg6ss\" (UID: \"50a54df2-bf89-40c1-a4da-24859f6d2afe\") " pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.617222 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50a54df2-bf89-40c1-a4da-24859f6d2afe-trusted-ca\") pod \"image-registry-66df7c8f76-sg6ss\" (UID: \"50a54df2-bf89-40c1-a4da-24859f6d2afe\") " pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.617264 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-sg6ss\" (UID: \"50a54df2-bf89-40c1-a4da-24859f6d2afe\") " pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.617311 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/50a54df2-bf89-40c1-a4da-24859f6d2afe-ca-trust-extracted\") pod \"image-registry-66df7c8f76-sg6ss\" (UID: \"50a54df2-bf89-40c1-a4da-24859f6d2afe\") " pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.617393 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/50a54df2-bf89-40c1-a4da-24859f6d2afe-installation-pull-secrets\") pod \"image-registry-66df7c8f76-sg6ss\" (UID: \"50a54df2-bf89-40c1-a4da-24859f6d2afe\") " pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.648515 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-sg6ss\" (UID: \"50a54df2-bf89-40c1-a4da-24859f6d2afe\") " pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.718637 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z27q\" (UniqueName: \"kubernetes.io/projected/50a54df2-bf89-40c1-a4da-24859f6d2afe-kube-api-access-8z27q\") pod \"image-registry-66df7c8f76-sg6ss\" (UID: \"50a54df2-bf89-40c1-a4da-24859f6d2afe\") " pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.718692 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/50a54df2-bf89-40c1-a4da-24859f6d2afe-registry-certificates\") pod \"image-registry-66df7c8f76-sg6ss\" (UID: \"50a54df2-bf89-40c1-a4da-24859f6d2afe\") " pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.718718 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50a54df2-bf89-40c1-a4da-24859f6d2afe-trusted-ca\") pod \"image-registry-66df7c8f76-sg6ss\" (UID: \"50a54df2-bf89-40c1-a4da-24859f6d2afe\") " pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.718781 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/50a54df2-bf89-40c1-a4da-24859f6d2afe-ca-trust-extracted\") pod \"image-registry-66df7c8f76-sg6ss\" (UID: \"50a54df2-bf89-40c1-a4da-24859f6d2afe\") " pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.718834 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/50a54df2-bf89-40c1-a4da-24859f6d2afe-installation-pull-secrets\") pod \"image-registry-66df7c8f76-sg6ss\" (UID: \"50a54df2-bf89-40c1-a4da-24859f6d2afe\") " pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.718859 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/50a54df2-bf89-40c1-a4da-24859f6d2afe-registry-tls\") pod \"image-registry-66df7c8f76-sg6ss\" (UID: \"50a54df2-bf89-40c1-a4da-24859f6d2afe\") " pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.718881 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50a54df2-bf89-40c1-a4da-24859f6d2afe-bound-sa-token\") pod \"image-registry-66df7c8f76-sg6ss\" (UID: \"50a54df2-bf89-40c1-a4da-24859f6d2afe\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.719839 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/50a54df2-bf89-40c1-a4da-24859f6d2afe-ca-trust-extracted\") pod \"image-registry-66df7c8f76-sg6ss\" (UID: \"50a54df2-bf89-40c1-a4da-24859f6d2afe\") " pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.720108 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/50a54df2-bf89-40c1-a4da-24859f6d2afe-registry-certificates\") pod \"image-registry-66df7c8f76-sg6ss\" (UID: \"50a54df2-bf89-40c1-a4da-24859f6d2afe\") " pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.720142 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50a54df2-bf89-40c1-a4da-24859f6d2afe-trusted-ca\") pod \"image-registry-66df7c8f76-sg6ss\" (UID: \"50a54df2-bf89-40c1-a4da-24859f6d2afe\") " pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.725630 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/50a54df2-bf89-40c1-a4da-24859f6d2afe-registry-tls\") pod \"image-registry-66df7c8f76-sg6ss\" (UID: \"50a54df2-bf89-40c1-a4da-24859f6d2afe\") " pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.726082 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/50a54df2-bf89-40c1-a4da-24859f6d2afe-installation-pull-secrets\") pod \"image-registry-66df7c8f76-sg6ss\" (UID: \"50a54df2-bf89-40c1-a4da-24859f6d2afe\") " pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.735526 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z27q\" (UniqueName: \"kubernetes.io/projected/50a54df2-bf89-40c1-a4da-24859f6d2afe-kube-api-access-8z27q\") pod \"image-registry-66df7c8f76-sg6ss\" (UID: \"50a54df2-bf89-40c1-a4da-24859f6d2afe\") " pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.736939 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50a54df2-bf89-40c1-a4da-24859f6d2afe-bound-sa-token\") pod \"image-registry-66df7c8f76-sg6ss\" (UID: \"50a54df2-bf89-40c1-a4da-24859f6d2afe\") " pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" Nov 24 11:16:56 crc kubenswrapper[5072]: I1124 11:16:56.837490 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" Nov 24 11:16:57 crc kubenswrapper[5072]: I1124 11:16:57.124036 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-sg6ss"] Nov 24 11:16:57 crc kubenswrapper[5072]: I1124 11:16:57.264494 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" event={"ID":"50a54df2-bf89-40c1-a4da-24859f6d2afe","Type":"ContainerStarted","Data":"de4ab18e4e74bedea353e209a4b83a1c069f44697ccf20eeb2ea15c4f96a389d"} Nov 24 11:16:58 crc kubenswrapper[5072]: I1124 11:16:58.275535 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" event={"ID":"50a54df2-bf89-40c1-a4da-24859f6d2afe","Type":"ContainerStarted","Data":"f9f5dc53987b91ca9a87cc7a3e87abd9ef14f63bf84781863c948e11a7aa2aa5"} Nov 24 11:16:58 crc kubenswrapper[5072]: I1124 11:16:58.275907 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" Nov 24 11:16:58 crc kubenswrapper[5072]: I1124 11:16:58.312012 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" podStartSLOduration=2.311979511 podStartE2EDuration="2.311979511s" podCreationTimestamp="2025-11-24 11:16:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:16:58.305035089 +0000 UTC m=+470.016559615" watchObservedRunningTime="2025-11-24 11:16:58.311979511 +0000 UTC m=+470.023504027" Nov 24 11:17:16 crc kubenswrapper[5072]: I1124 11:17:16.848259 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-sg6ss" Nov 24 11:17:16 crc kubenswrapper[5072]: I1124 11:17:16.926278 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9w2qz"] Nov 24 11:17:41 crc kubenswrapper[5072]: I1124 11:17:41.974063 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" podUID="d68516ef-c18f-4d3f-bc80-71739e73cee1" containerName="registry" containerID="cri-o://bc443c4756d71119b2cb06fe4b2b1fcc698178d163338849422cedc0d20f7424" gracePeriod=30 Nov 24 11:17:42 crc kubenswrapper[5072]: I1124 11:17:42.573566 5072 generic.go:334] "Generic (PLEG): container finished" podID="d68516ef-c18f-4d3f-bc80-71739e73cee1" containerID="bc443c4756d71119b2cb06fe4b2b1fcc698178d163338849422cedc0d20f7424" exitCode=0 Nov 24 11:17:42 crc kubenswrapper[5072]: I1124 11:17:42.573629 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" event={"ID":"d68516ef-c18f-4d3f-bc80-71739e73cee1","Type":"ContainerDied","Data":"bc443c4756d71119b2cb06fe4b2b1fcc698178d163338849422cedc0d20f7424"} Nov 24 11:17:42 crc kubenswrapper[5072]: I1124 11:17:42.675446 5072 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-9w2qz container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.13:5000/healthz\": dial tcp 10.217.0.13:5000: connect: connection refused" start-of-body= Nov 24 11:17:42 crc kubenswrapper[5072]: I1124 11:17:42.675532 5072 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" podUID="d68516ef-c18f-4d3f-bc80-71739e73cee1" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.13:5000/healthz\": dial tcp 10.217.0.13:5000: connect: connection refused" Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.064305 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.177497 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48xbw\" (UniqueName: \"kubernetes.io/projected/d68516ef-c18f-4d3f-bc80-71739e73cee1-kube-api-access-48xbw\") pod \"d68516ef-c18f-4d3f-bc80-71739e73cee1\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.177563 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d68516ef-c18f-4d3f-bc80-71739e73cee1-trusted-ca\") pod \"d68516ef-c18f-4d3f-bc80-71739e73cee1\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.177634 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d68516ef-c18f-4d3f-bc80-71739e73cee1-registry-tls\") pod \"d68516ef-c18f-4d3f-bc80-71739e73cee1\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.177961 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"d68516ef-c18f-4d3f-bc80-71739e73cee1\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.178081 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d68516ef-c18f-4d3f-bc80-71739e73cee1-installation-pull-secrets\") pod \"d68516ef-c18f-4d3f-bc80-71739e73cee1\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.178718 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d68516ef-c18f-4d3f-bc80-71739e73cee1-ca-trust-extracted\") pod \"d68516ef-c18f-4d3f-bc80-71739e73cee1\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.178790 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d68516ef-c18f-4d3f-bc80-71739e73cee1-registry-certificates\") pod \"d68516ef-c18f-4d3f-bc80-71739e73cee1\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.178833 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d68516ef-c18f-4d3f-bc80-71739e73cee1-bound-sa-token\") pod \"d68516ef-c18f-4d3f-bc80-71739e73cee1\" (UID: \"d68516ef-c18f-4d3f-bc80-71739e73cee1\") " Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.178923 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/d68516ef-c18f-4d3f-bc80-71739e73cee1-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d68516ef-c18f-4d3f-bc80-71739e73cee1" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.179218 5072 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d68516ef-c18f-4d3f-bc80-71739e73cee1-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.179992 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d68516ef-c18f-4d3f-bc80-71739e73cee1-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d68516ef-c18f-4d3f-bc80-71739e73cee1" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.184530 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d68516ef-c18f-4d3f-bc80-71739e73cee1-kube-api-access-48xbw" (OuterVolumeSpecName: "kube-api-access-48xbw") pod "d68516ef-c18f-4d3f-bc80-71739e73cee1" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1"). InnerVolumeSpecName "kube-api-access-48xbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.184735 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d68516ef-c18f-4d3f-bc80-71739e73cee1-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d68516ef-c18f-4d3f-bc80-71739e73cee1" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.185142 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d68516ef-c18f-4d3f-bc80-71739e73cee1-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "d68516ef-c18f-4d3f-bc80-71739e73cee1" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.185906 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d68516ef-c18f-4d3f-bc80-71739e73cee1-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d68516ef-c18f-4d3f-bc80-71739e73cee1" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.191196 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "d68516ef-c18f-4d3f-bc80-71739e73cee1" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.203696 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d68516ef-c18f-4d3f-bc80-71739e73cee1-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d68516ef-c18f-4d3f-bc80-71739e73cee1" (UID: "d68516ef-c18f-4d3f-bc80-71739e73cee1"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.280125 5072 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d68516ef-c18f-4d3f-bc80-71739e73cee1-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.280162 5072 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d68516ef-c18f-4d3f-bc80-71739e73cee1-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.280174 5072 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d68516ef-c18f-4d3f-bc80-71739e73cee1-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.280186 5072 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d68516ef-c18f-4d3f-bc80-71739e73cee1-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.280197 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48xbw\" (UniqueName: \"kubernetes.io/projected/d68516ef-c18f-4d3f-bc80-71739e73cee1-kube-api-access-48xbw\") on node \"crc\" DevicePath \"\"" Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.280209 5072 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d68516ef-c18f-4d3f-bc80-71739e73cee1-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.583901 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" event={"ID":"d68516ef-c18f-4d3f-bc80-71739e73cee1","Type":"ContainerDied","Data":"b50e3edb3e87ac26b6fadae92cd538b42386f7ce95e0f359f3a5ea97a6809f73"} Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.583975 5072 scope.go:117] "RemoveContainer" containerID="bc443c4756d71119b2cb06fe4b2b1fcc698178d163338849422cedc0d20f7424" Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.583991 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9w2qz" Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.638764 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9w2qz"] Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.646066 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9w2qz"] Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.646325 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:17:43 crc kubenswrapper[5072]: I1124 11:17:43.646529 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:17:45 crc kubenswrapper[5072]: I1124 11:17:45.028874 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d68516ef-c18f-4d3f-bc80-71739e73cee1" path="/var/lib/kubelet/pods/d68516ef-c18f-4d3f-bc80-71739e73cee1/volumes" Nov 24 11:18:13 crc kubenswrapper[5072]: I1124 11:18:13.645321 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:18:13 crc kubenswrapper[5072]: I1124 11:18:13.645994 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:18:43 crc kubenswrapper[5072]: I1124 11:18:43.644798 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:18:43 crc kubenswrapper[5072]: I1124 11:18:43.645505 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:18:43 crc kubenswrapper[5072]: I1124 11:18:43.645868 5072 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 11:18:43 crc kubenswrapper[5072]: I1124 11:18:43.646638 5072 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0a6ebf9514d44fa623afa2ad42e78869426bcafc62c418072ab42294a40efd6e"} pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Nov 24 11:18:43 crc kubenswrapper[5072]: I1124 11:18:43.646727 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" containerID="cri-o://0a6ebf9514d44fa623afa2ad42e78869426bcafc62c418072ab42294a40efd6e" gracePeriod=600 Nov 24 11:18:44 crc kubenswrapper[5072]: I1124 11:18:44.001250 5072 generic.go:334] "Generic (PLEG): container finished" podID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerID="0a6ebf9514d44fa623afa2ad42e78869426bcafc62c418072ab42294a40efd6e" exitCode=0 Nov 24 11:18:44 crc kubenswrapper[5072]: I1124 11:18:44.001503 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerDied","Data":"0a6ebf9514d44fa623afa2ad42e78869426bcafc62c418072ab42294a40efd6e"} Nov 24 11:18:44 crc kubenswrapper[5072]: I1124 11:18:44.001992 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerStarted","Data":"9acae0aae65eaa2777547c62fd161d329c111af7aec02efa5b970dc26ddc2ae7"} Nov 24 11:18:44 crc kubenswrapper[5072]: I1124 11:18:44.002066 5072 scope.go:117] "RemoveContainer" containerID="e839d6d58c16c68cbc04eeeedb69dee8ec0dd6b4c9bf97590bae2b1dd76b231f" Nov 24 11:19:18 crc kubenswrapper[5072]: I1124 11:19:18.572733 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-v62vq"] Nov 24 11:19:18 crc kubenswrapper[5072]: E1124 11:19:18.573630 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d68516ef-c18f-4d3f-bc80-71739e73cee1" containerName="registry" Nov 24 11:19:18 crc kubenswrapper[5072]: I1124 11:19:18.573648 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="d68516ef-c18f-4d3f-bc80-71739e73cee1" containerName="registry" Nov 24 11:19:18 crc kubenswrapper[5072]: I1124 11:19:18.573814 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="d68516ef-c18f-4d3f-bc80-71739e73cee1" containerName="registry" Nov 24 11:19:18 crc kubenswrapper[5072]: I1124 11:19:18.574325 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-v62vq" Nov 24 11:19:18 crc kubenswrapper[5072]: I1124 11:19:18.576219 5072 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-qfdw5" Nov 24 11:19:18 crc kubenswrapper[5072]: I1124 11:19:18.576559 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 24 11:19:18 crc kubenswrapper[5072]: I1124 11:19:18.579321 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-g8nvp"] Nov 24 11:19:18 crc kubenswrapper[5072]: I1124 11:19:18.579865 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 24 11:19:18 crc kubenswrapper[5072]: I1124 11:19:18.580046 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-g8nvp" Nov 24 11:19:18 crc kubenswrapper[5072]: I1124 11:19:18.590293 5072 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-xl2rw" Nov 24 11:19:18 crc kubenswrapper[5072]: I1124 11:19:18.590495 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-v62vq"] Nov 24 11:19:18 crc kubenswrapper[5072]: I1124 11:19:18.596645 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-g8nvp"] Nov 24 11:19:18 crc kubenswrapper[5072]: I1124 11:19:18.621454 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-hcmw7"] Nov 24 11:19:18 crc kubenswrapper[5072]: I1124 11:19:18.622244 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-hcmw7" Nov 24 11:19:18 crc kubenswrapper[5072]: I1124 11:19:18.624005 5072 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-94tff" Nov 24 11:19:18 crc kubenswrapper[5072]: I1124 11:19:18.651075 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-hcmw7"] Nov 24 11:19:18 crc kubenswrapper[5072]: I1124 11:19:18.689727 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znfrq\" (UniqueName: \"kubernetes.io/projected/5da70e2a-5e52-437b-b1e4-fee7f8460a72-kube-api-access-znfrq\") pod \"cert-manager-webhook-5655c58dd6-hcmw7\" (UID: \"5da70e2a-5e52-437b-b1e4-fee7f8460a72\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-hcmw7" Nov 24 11:19:18 crc kubenswrapper[5072]: I1124 11:19:18.689775 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjfqw\" (UniqueName: \"kubernetes.io/projected/69649578-7c12-47bd-900a-a6ebe612c305-kube-api-access-sjfqw\") pod \"cert-manager-5b446d88c5-g8nvp\" (UID: \"69649578-7c12-47bd-900a-a6ebe612c305\") " pod="cert-manager/cert-manager-5b446d88c5-g8nvp" Nov 24 11:19:18 crc kubenswrapper[5072]: I1124 11:19:18.689858 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ntgj\" (UniqueName: \"kubernetes.io/projected/01b23be1-c336-40a5-8b57-60ed5edddef1-kube-api-access-2ntgj\") pod \"cert-manager-cainjector-7f985d654d-v62vq\" (UID: \"01b23be1-c336-40a5-8b57-60ed5edddef1\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-v62vq" Nov 24 11:19:18 crc kubenswrapper[5072]: I1124 11:19:18.790816 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ntgj\" (UniqueName: \"kubernetes.io/projected/01b23be1-c336-40a5-8b57-60ed5edddef1-kube-api-access-2ntgj\") pod \"cert-manager-cainjector-7f985d654d-v62vq\" (UID: \"01b23be1-c336-40a5-8b57-60ed5edddef1\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-v62vq" Nov 24 11:19:18 crc kubenswrapper[5072]: I1124 11:19:18.790885 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znfrq\" (UniqueName: \"kubernetes.io/projected/5da70e2a-5e52-437b-b1e4-fee7f8460a72-kube-api-access-znfrq\") pod \"cert-manager-webhook-5655c58dd6-hcmw7\" (UID: \"5da70e2a-5e52-437b-b1e4-fee7f8460a72\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-hcmw7" Nov 24 11:19:18 crc kubenswrapper[5072]: I1124 
11:19:18.790905 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjfqw\" (UniqueName: \"kubernetes.io/projected/69649578-7c12-47bd-900a-a6ebe612c305-kube-api-access-sjfqw\") pod \"cert-manager-5b446d88c5-g8nvp\" (UID: \"69649578-7c12-47bd-900a-a6ebe612c305\") " pod="cert-manager/cert-manager-5b446d88c5-g8nvp" Nov 24 11:19:18 crc kubenswrapper[5072]: I1124 11:19:18.808077 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjfqw\" (UniqueName: \"kubernetes.io/projected/69649578-7c12-47bd-900a-a6ebe612c305-kube-api-access-sjfqw\") pod \"cert-manager-5b446d88c5-g8nvp\" (UID: \"69649578-7c12-47bd-900a-a6ebe612c305\") " pod="cert-manager/cert-manager-5b446d88c5-g8nvp" Nov 24 11:19:18 crc kubenswrapper[5072]: I1124 11:19:18.808209 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znfrq\" (UniqueName: \"kubernetes.io/projected/5da70e2a-5e52-437b-b1e4-fee7f8460a72-kube-api-access-znfrq\") pod \"cert-manager-webhook-5655c58dd6-hcmw7\" (UID: \"5da70e2a-5e52-437b-b1e4-fee7f8460a72\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-hcmw7" Nov 24 11:19:18 crc kubenswrapper[5072]: I1124 11:19:18.810406 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ntgj\" (UniqueName: \"kubernetes.io/projected/01b23be1-c336-40a5-8b57-60ed5edddef1-kube-api-access-2ntgj\") pod \"cert-manager-cainjector-7f985d654d-v62vq\" (UID: \"01b23be1-c336-40a5-8b57-60ed5edddef1\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-v62vq" Nov 24 11:19:18 crc kubenswrapper[5072]: I1124 11:19:18.893248 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-v62vq" Nov 24 11:19:18 crc kubenswrapper[5072]: I1124 11:19:18.899149 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-g8nvp" Nov 24 11:19:18 crc kubenswrapper[5072]: I1124 11:19:18.934059 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-hcmw7" Nov 24 11:19:19 crc kubenswrapper[5072]: I1124 11:19:19.209796 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-hcmw7"] Nov 24 11:19:19 crc kubenswrapper[5072]: I1124 11:19:19.213518 5072 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 11:19:19 crc kubenswrapper[5072]: I1124 11:19:19.226683 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-hcmw7" event={"ID":"5da70e2a-5e52-437b-b1e4-fee7f8460a72","Type":"ContainerStarted","Data":"7d6f2ddc0999851f43401219ef074e2dbc22f4e1466505b60d8a2d67951e5686"} Nov 24 11:19:19 crc kubenswrapper[5072]: I1124 11:19:19.346057 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-v62vq"] Nov 24 11:19:19 crc kubenswrapper[5072]: I1124 11:19:19.351042 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-g8nvp"] Nov 24 11:19:19 crc kubenswrapper[5072]: W1124 11:19:19.353253 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01b23be1_c336_40a5_8b57_60ed5edddef1.slice/crio-c99e6c104acdcd9386d8ceb031d6ccae267bdf7f6b09653013385298383155fa WatchSource:0}: Error finding container c99e6c104acdcd9386d8ceb031d6ccae267bdf7f6b09653013385298383155fa: Status 404 returned error can't find the container with id c99e6c104acdcd9386d8ceb031d6ccae267bdf7f6b09653013385298383155fa Nov 24 11:19:19 crc kubenswrapper[5072]: W1124 11:19:19.355910 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69649578_7c12_47bd_900a_a6ebe612c305.slice/crio-51a9d9033089af9b6a303254268e631a4de6ef1b5e47829c61a9b3f4aae2ba04 WatchSource:0}: Error finding container 51a9d9033089af9b6a303254268e631a4de6ef1b5e47829c61a9b3f4aae2ba04: Status 404 returned error can't find the container with id 51a9d9033089af9b6a303254268e631a4de6ef1b5e47829c61a9b3f4aae2ba04 Nov 24 11:19:20 crc kubenswrapper[5072]: I1124 11:19:20.234341 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-v62vq" event={"ID":"01b23be1-c336-40a5-8b57-60ed5edddef1","Type":"ContainerStarted","Data":"c99e6c104acdcd9386d8ceb031d6ccae267bdf7f6b09653013385298383155fa"} Nov 24 11:19:20 crc kubenswrapper[5072]: I1124 11:19:20.235643 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-g8nvp" event={"ID":"69649578-7c12-47bd-900a-a6ebe612c305","Type":"ContainerStarted","Data":"51a9d9033089af9b6a303254268e631a4de6ef1b5e47829c61a9b3f4aae2ba04"} Nov 24 11:19:22 crc kubenswrapper[5072]: I1124 11:19:22.247484 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-hcmw7" event={"ID":"5da70e2a-5e52-437b-b1e4-fee7f8460a72","Type":"ContainerStarted","Data":"f00b1ccc5633f2af4a2f0a446deca31f309e44cb4e94ab12420f668d1d01e742"} Nov 24 11:19:22 crc kubenswrapper[5072]: I1124 11:19:22.247739 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-hcmw7" Nov 24 11:19:22 crc kubenswrapper[5072]: I1124 11:19:22.266541 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-hcmw7" 
podStartSLOduration=2.174298488 podStartE2EDuration="4.266522718s" podCreationTimestamp="2025-11-24 11:19:18 +0000 UTC" firstStartedPulling="2025-11-24 11:19:19.213272758 +0000 UTC m=+610.924797234" lastFinishedPulling="2025-11-24 11:19:21.305496988 +0000 UTC m=+613.017021464" observedRunningTime="2025-11-24 11:19:22.262748153 +0000 UTC m=+613.974272629" watchObservedRunningTime="2025-11-24 11:19:22.266522718 +0000 UTC m=+613.978047204" Nov 24 11:19:23 crc kubenswrapper[5072]: I1124 11:19:23.254653 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-v62vq" event={"ID":"01b23be1-c336-40a5-8b57-60ed5edddef1","Type":"ContainerStarted","Data":"4575c4d31fbec760f0dd64ce18db0a4e08162f4041038e9827ce0e3339339264"} Nov 24 11:19:23 crc kubenswrapper[5072]: I1124 11:19:23.258582 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-g8nvp" event={"ID":"69649578-7c12-47bd-900a-a6ebe612c305","Type":"ContainerStarted","Data":"c2bf06b458578bedf61156a2a22d7ced217f9a906b42ba101c57d6147174ee3e"} Nov 24 11:19:23 crc kubenswrapper[5072]: I1124 11:19:23.271536 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-v62vq" podStartSLOduration=2.056280272 podStartE2EDuration="5.271516257s" podCreationTimestamp="2025-11-24 11:19:18 +0000 UTC" firstStartedPulling="2025-11-24 11:19:19.355461085 +0000 UTC m=+611.066985601" lastFinishedPulling="2025-11-24 11:19:22.5706971 +0000 UTC m=+614.282221586" observedRunningTime="2025-11-24 11:19:23.269540787 +0000 UTC m=+614.981065273" watchObservedRunningTime="2025-11-24 11:19:23.271516257 +0000 UTC m=+614.983040733" Nov 24 11:19:23 crc kubenswrapper[5072]: I1124 11:19:23.291690 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-g8nvp" podStartSLOduration=2.065476714 podStartE2EDuration="5.291667565s" podCreationTimestamp="2025-11-24 11:19:18 +0000 UTC" firstStartedPulling="2025-11-24 11:19:19.358671776 +0000 UTC m=+611.070196262" lastFinishedPulling="2025-11-24 11:19:22.584862627 +0000 UTC m=+614.296387113" observedRunningTime="2025-11-24 11:19:23.287337706 +0000 UTC m=+614.998862192" watchObservedRunningTime="2025-11-24 11:19:23.291667565 +0000 UTC m=+615.003192061" Nov 24 11:19:28 crc kubenswrapper[5072]: I1124 11:19:28.938356 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-hcmw7" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.355086 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n4qmw"] Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.355606 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="ovn-controller" containerID="cri-o://7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb" gracePeriod=30 Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.355820 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="northd" containerID="cri-o://9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39" gracePeriod=30 Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.356013 5072 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="sbdb" containerID="cri-o://af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975" gracePeriod=30 Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.356068 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="nbdb" containerID="cri-o://89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491" gracePeriod=30 Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.356213 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="kube-rbac-proxy-node" containerID="cri-o://1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790" gracePeriod=30 Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.356264 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24" gracePeriod=30 Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.356316 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="ovn-acl-logging" containerID="cri-o://98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9" gracePeriod=30 Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.424865 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="ovnkube-controller" containerID="cri-o://742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93" gracePeriod=30 Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.703017 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4qmw_80fda759-ddfd-438a-b5a2-cb775ee1bf7e/ovnkube-controller/3.log" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.704832 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4qmw_80fda759-ddfd-438a-b5a2-cb775ee1bf7e/ovn-acl-logging/0.log" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.705342 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4qmw_80fda759-ddfd-438a-b5a2-cb775ee1bf7e/ovn-controller/0.log" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.705730 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.755481 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trpxh\" (UniqueName: \"kubernetes.io/projected/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-kube-api-access-trpxh\") pod \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.755556 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-run-openvswitch\") pod \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.755606 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-run-ovn-kubernetes\") pod \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.755708 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "80fda759-ddfd-438a-b5a2-cb775ee1bf7e" (UID: "80fda759-ddfd-438a-b5a2-cb775ee1bf7e"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.755712 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "80fda759-ddfd-438a-b5a2-cb775ee1bf7e" (UID: "80fda759-ddfd-438a-b5a2-cb775ee1bf7e"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.755766 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-ovn-node-metrics-cert\") pod \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.755796 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-slash\") pod \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.755856 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-cni-bin\") pod \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.755908 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-slash" (OuterVolumeSpecName: "host-slash") pod "80fda759-ddfd-438a-b5a2-cb775ee1bf7e" (UID: "80fda759-ddfd-438a-b5a2-cb775ee1bf7e"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.755954 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-var-lib-openvswitch\") pod \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.756009 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "80fda759-ddfd-438a-b5a2-cb775ee1bf7e" (UID: "80fda759-ddfd-438a-b5a2-cb775ee1bf7e"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.756021 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "80fda759-ddfd-438a-b5a2-cb775ee1bf7e" (UID: "80fda759-ddfd-438a-b5a2-cb775ee1bf7e"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.756074 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-log-socket\") pod \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.756128 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-log-socket" (OuterVolumeSpecName: "log-socket") pod "80fda759-ddfd-438a-b5a2-cb775ee1bf7e" (UID: "80fda759-ddfd-438a-b5a2-cb775ee1bf7e"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.756158 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.756221 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-run-netns\") pod \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.756249 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "80fda759-ddfd-438a-b5a2-cb775ee1bf7e" (UID: "80fda759-ddfd-438a-b5a2-cb775ee1bf7e"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.756278 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-cni-netd\") pod \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.756297 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "80fda759-ddfd-438a-b5a2-cb775ee1bf7e" (UID: "80fda759-ddfd-438a-b5a2-cb775ee1bf7e"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.756334 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-env-overrides\") pod \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.756346 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "80fda759-ddfd-438a-b5a2-cb775ee1bf7e" (UID: "80fda759-ddfd-438a-b5a2-cb775ee1bf7e"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.756430 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-etc-openvswitch\") pod \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.756493 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-run-systemd\") pod \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.756541 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-kubelet\") pod \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.756593 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-run-ovn\") pod \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.756650 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-ovnkube-config\") pod \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.756689 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" 
(UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-systemd-units\") pod \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.756733 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-node-log\") pod \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.756804 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-ovnkube-script-lib\") pod \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\" (UID: \"80fda759-ddfd-438a-b5a2-cb775ee1bf7e\") " Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.756832 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "80fda759-ddfd-438a-b5a2-cb775ee1bf7e" (UID: "80fda759-ddfd-438a-b5a2-cb775ee1bf7e"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.756863 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "80fda759-ddfd-438a-b5a2-cb775ee1bf7e" (UID: "80fda759-ddfd-438a-b5a2-cb775ee1bf7e"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.756935 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "80fda759-ddfd-438a-b5a2-cb775ee1bf7e" (UID: "80fda759-ddfd-438a-b5a2-cb775ee1bf7e"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.756974 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "80fda759-ddfd-438a-b5a2-cb775ee1bf7e" (UID: "80fda759-ddfd-438a-b5a2-cb775ee1bf7e"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.757035 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-node-log" (OuterVolumeSpecName: "node-log") pod "80fda759-ddfd-438a-b5a2-cb775ee1bf7e" (UID: "80fda759-ddfd-438a-b5a2-cb775ee1bf7e"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.757066 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "80fda759-ddfd-438a-b5a2-cb775ee1bf7e" (UID: "80fda759-ddfd-438a-b5a2-cb775ee1bf7e"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.757201 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "80fda759-ddfd-438a-b5a2-cb775ee1bf7e" (UID: "80fda759-ddfd-438a-b5a2-cb775ee1bf7e"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.757290 5072 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.757341 5072 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.757404 5072 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-slash\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.757431 5072 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.757455 5072 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.757480 5072 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.757505 5072 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-log-socket\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.757529 5072 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.757532 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "80fda759-ddfd-438a-b5a2-cb775ee1bf7e" (UID: "80fda759-ddfd-438a-b5a2-cb775ee1bf7e"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.757555 5072 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.757578 5072 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.757601 5072 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.757624 5072 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.757646 5072 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.757669 5072 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.757694 5072 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-node-log\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.762541 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-kube-api-access-trpxh" (OuterVolumeSpecName: "kube-api-access-trpxh") pod "80fda759-ddfd-438a-b5a2-cb775ee1bf7e" (UID: "80fda759-ddfd-438a-b5a2-cb775ee1bf7e"). InnerVolumeSpecName "kube-api-access-trpxh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.764822 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z4bj4"] Nov 24 11:19:29 crc kubenswrapper[5072]: E1124 11:19:29.765047 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="ovnkube-controller" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.765067 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="ovnkube-controller" Nov 24 11:19:29 crc kubenswrapper[5072]: E1124 11:19:29.765079 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="ovnkube-controller" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.765086 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="ovnkube-controller" Nov 24 11:19:29 crc kubenswrapper[5072]: E1124 11:19:29.765094 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="nbdb" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.765100 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="nbdb" Nov 24 11:19:29 crc kubenswrapper[5072]: E1124 11:19:29.765114 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="northd" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.765121 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="northd" Nov 24 11:19:29 crc kubenswrapper[5072]: E1124 11:19:29.765130 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="kube-rbac-proxy-node" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.765136 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="kube-rbac-proxy-node" Nov 24 11:19:29 crc kubenswrapper[5072]: E1124 11:19:29.765146 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="ovn-controller" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.765153 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="ovn-controller" Nov 24 11:19:29 crc kubenswrapper[5072]: E1124 11:19:29.765165 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="ovnkube-controller" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.765171 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="ovnkube-controller" Nov 24 11:19:29 crc kubenswrapper[5072]: E1124 11:19:29.765180 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="ovnkube-controller" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.765187 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="ovnkube-controller" Nov 24 11:19:29 crc kubenswrapper[5072]: E1124 11:19:29.765197 5072 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="sbdb" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.765204 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="sbdb" Nov 24 11:19:29 crc kubenswrapper[5072]: E1124 11:19:29.765212 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="ovn-acl-logging" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.765219 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="ovn-acl-logging" Nov 24 11:19:29 crc kubenswrapper[5072]: E1124 11:19:29.765233 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="kubecfg-setup" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.765239 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="kubecfg-setup" Nov 24 11:19:29 crc kubenswrapper[5072]: E1124 11:19:29.765248 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.765255 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.765361 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="ovnkube-controller" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.765352 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "80fda759-ddfd-438a-b5a2-cb775ee1bf7e" (UID: "80fda759-ddfd-438a-b5a2-cb775ee1bf7e"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.765387 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="nbdb" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.765398 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="ovnkube-controller" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.765411 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="kube-rbac-proxy-node" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.765422 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="northd" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.765432 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="ovn-controller" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.765439 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="ovn-acl-logging" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.765450 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="ovnkube-controller" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.765457 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.765468 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="sbdb" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.765477 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="ovnkube-controller" Nov 24 11:19:29 crc kubenswrapper[5072]: E1124 11:19:29.765581 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="ovnkube-controller" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.765591 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="ovnkube-controller" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.765691 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerName="ovnkube-controller" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.767181 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.779080 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "80fda759-ddfd-438a-b5a2-cb775ee1bf7e" (UID: "80fda759-ddfd-438a-b5a2-cb775ee1bf7e"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.858960 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-host-run-ovn-kubernetes\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.859006 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-var-lib-openvswitch\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.859044 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-host-cni-bin\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.859078 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-run-ovn\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.859107 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-host-slash\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.859171 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbrkw\" (UniqueName: \"kubernetes.io/projected/77aeb6df-2cbe-4e4d-a103-d530f95eee80-kube-api-access-xbrkw\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.859194 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/77aeb6df-2cbe-4e4d-a103-d530f95eee80-ovnkube-config\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.859221 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-log-socket\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.859248 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/77aeb6df-2cbe-4e4d-a103-d530f95eee80-ovn-node-metrics-cert\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.859271 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/77aeb6df-2cbe-4e4d-a103-d530f95eee80-env-overrides\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.859386 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/77aeb6df-2cbe-4e4d-a103-d530f95eee80-ovnkube-script-lib\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.859456 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-run-systemd\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.859495 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-host-run-netns\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.859567 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-etc-openvswitch\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.859596 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-node-log\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.859624 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.859655 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-systemd-units\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.859683 5072 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-host-cni-netd\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.859709 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-run-openvswitch\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.859738 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-host-kubelet\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.859847 5072 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-run-systemd\") on node \"crc\" DevicePath \"\""
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.859861 5072 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-ovnkube-config\") on node \"crc\" DevicePath \"\""
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.859870 5072 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.859879 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trpxh\" (UniqueName: \"kubernetes.io/projected/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-kube-api-access-trpxh\") on node \"crc\" DevicePath \"\""
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.859888 5072 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/80fda759-ddfd-438a-b5a2-cb775ee1bf7e-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.960816 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-run-ovn\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.960889 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-host-slash\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.960929 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbrkw\" (UniqueName: \"kubernetes.io/projected/77aeb6df-2cbe-4e4d-a103-d530f95eee80-kube-api-access-xbrkw\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.960967 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/77aeb6df-2cbe-4e4d-a103-d530f95eee80-ovnkube-config\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.960980 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-run-ovn\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.961045 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-log-socket\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.960999 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-log-socket\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.961120 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/77aeb6df-2cbe-4e4d-a103-d530f95eee80-ovn-node-metrics-cert\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.960992 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-host-slash\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.961151 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/77aeb6df-2cbe-4e4d-a103-d530f95eee80-env-overrides\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.961273 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/77aeb6df-2cbe-4e4d-a103-d530f95eee80-ovnkube-script-lib\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.961339 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-run-systemd\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.961434 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-host-run-netns\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.961497 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-etc-openvswitch\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.961544 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-node-log\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.961588 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.961637 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-systemd-units\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.961704 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-host-cni-netd\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.961783 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-run-openvswitch\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.961830 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-host-kubelet\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.961872 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-host-run-ovn-kubernetes\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.961917 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-var-lib-openvswitch\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.961972 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-host-cni-bin\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.962087 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-host-cni-bin\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.962158 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-run-systemd\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.962224 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-host-run-netns\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.962285 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-etc-openvswitch\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.962350 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-node-log\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.962447 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.962487 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/77aeb6df-2cbe-4e4d-a103-d530f95eee80-env-overrides\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.962511 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-systemd-units\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.962558 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-host-kubelet\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.962594 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-host-cni-netd\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.962623 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-run-openvswitch\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.962652 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-host-run-ovn-kubernetes\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.962681 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77aeb6df-2cbe-4e4d-a103-d530f95eee80-var-lib-openvswitch\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.962800 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/77aeb6df-2cbe-4e4d-a103-d530f95eee80-ovnkube-script-lib\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.962945 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/77aeb6df-2cbe-4e4d-a103-d530f95eee80-ovnkube-config\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.967217 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/77aeb6df-2cbe-4e4d-a103-d530f95eee80-ovn-node-metrics-cert\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:29 crc kubenswrapper[5072]: I1124 11:19:29.988820 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbrkw\" (UniqueName: \"kubernetes.io/projected/77aeb6df-2cbe-4e4d-a103-d530f95eee80-kube-api-access-xbrkw\") pod \"ovnkube-node-z4bj4\" (UID: \"77aeb6df-2cbe-4e4d-a103-d530f95eee80\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.085397 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4"
Nov 24 11:19:30 crc kubenswrapper[5072]: W1124 11:19:30.111548 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77aeb6df_2cbe_4e4d_a103_d530f95eee80.slice/crio-03a5319129e9527a20f69b9ee73436c01060d2fa5f153f207a002208e4c8f8bd WatchSource:0}: Error finding container 03a5319129e9527a20f69b9ee73436c01060d2fa5f153f207a002208e4c8f8bd: Status 404 returned error can't find the container with id 03a5319129e9527a20f69b9ee73436c01060d2fa5f153f207a002208e4c8f8bd
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.307671 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4qmw_80fda759-ddfd-438a-b5a2-cb775ee1bf7e/ovnkube-controller/3.log"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.310340 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4qmw_80fda759-ddfd-438a-b5a2-cb775ee1bf7e/ovn-acl-logging/0.log"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.311088 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4qmw_80fda759-ddfd-438a-b5a2-cb775ee1bf7e/ovn-controller/0.log"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.312061 5072 generic.go:334] "Generic (PLEG): container finished" podID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerID="742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93" exitCode=0
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.312113 5072 generic.go:334] "Generic (PLEG): container finished" podID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerID="af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975" exitCode=0
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.312135 5072 generic.go:334] "Generic (PLEG): container finished" podID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerID="89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491" exitCode=0
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.312157 5072 generic.go:334] "Generic (PLEG): container finished" podID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerID="9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39" exitCode=0
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.314501 5072 generic.go:334] "Generic (PLEG): container finished" podID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerID="c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24" exitCode=0
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.314535 5072 generic.go:334] "Generic (PLEG): container finished" podID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerID="1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790" exitCode=0
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.314558 5072 generic.go:334] "Generic (PLEG): container finished" podID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerID="98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9" exitCode=143
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.314577 5072 generic.go:334] "Generic (PLEG): container finished" podID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" containerID="7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb" exitCode=143
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.312233 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" event={"ID":"80fda759-ddfd-438a-b5a2-cb775ee1bf7e","Type":"ContainerDied","Data":"742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.314708 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" event={"ID":"80fda759-ddfd-438a-b5a2-cb775ee1bf7e","Type":"ContainerDied","Data":"af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.314742 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" event={"ID":"80fda759-ddfd-438a-b5a2-cb775ee1bf7e","Type":"ContainerDied","Data":"89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.314774 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" event={"ID":"80fda759-ddfd-438a-b5a2-cb775ee1bf7e","Type":"ContainerDied","Data":"9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.314799 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" event={"ID":"80fda759-ddfd-438a-b5a2-cb775ee1bf7e","Type":"ContainerDied","Data":"c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.314823 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" event={"ID":"80fda759-ddfd-438a-b5a2-cb775ee1bf7e","Type":"ContainerDied","Data":"1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.314847 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.314868 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.314883 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.314897 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.314910 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.314924 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.314937 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.314952 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.314965 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.314985 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" event={"ID":"80fda759-ddfd-438a-b5a2-cb775ee1bf7e","Type":"ContainerDied","Data":"98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315005 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315019 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315033 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315046 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315060 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315075 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315091 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315105 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315122 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315138 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315158 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" event={"ID":"80fda759-ddfd-438a-b5a2-cb775ee1bf7e","Type":"ContainerDied","Data":"7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315178 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315195 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315209 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315222 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315236 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315249 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315263 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315276 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315289 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315304 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315328 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw" event={"ID":"80fda759-ddfd-438a-b5a2-cb775ee1bf7e","Type":"ContainerDied","Data":"c1373cc5d09a0d75178ee71120ac335cf3b3503e019ef93010195b148b5501b9"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315348 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315363 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315408 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315423 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315437 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315451 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315465 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315479 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315492 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315506 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.315535 5072 scope.go:117] "RemoveContainer" containerID="742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.312279 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n4qmw"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.321218 5072 generic.go:334] "Generic (PLEG): container finished" podID="77aeb6df-2cbe-4e4d-a103-d530f95eee80" containerID="89b0e46bf55d371129d208c528bf3ca4034e7a550437f536452abe4f45b84b9c" exitCode=0
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.321348 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" event={"ID":"77aeb6df-2cbe-4e4d-a103-d530f95eee80","Type":"ContainerDied","Data":"89b0e46bf55d371129d208c528bf3ca4034e7a550437f536452abe4f45b84b9c"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.321397 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" event={"ID":"77aeb6df-2cbe-4e4d-a103-d530f95eee80","Type":"ContainerStarted","Data":"03a5319129e9527a20f69b9ee73436c01060d2fa5f153f207a002208e4c8f8bd"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.325415 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t8b9x_1a9fe7b3-71a3-4388-8ee4-7531ceef6049/kube-multus/2.log"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.328536 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t8b9x_1a9fe7b3-71a3-4388-8ee4-7531ceef6049/kube-multus/1.log"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.328627 5072 generic.go:334] "Generic (PLEG): container finished" podID="1a9fe7b3-71a3-4388-8ee4-7531ceef6049" containerID="bfd40dad8f619581f0615e6e2037e751d4dfed983e7bf4530c461175ff0bb47f" exitCode=2
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.328681 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t8b9x" event={"ID":"1a9fe7b3-71a3-4388-8ee4-7531ceef6049","Type":"ContainerDied","Data":"bfd40dad8f619581f0615e6e2037e751d4dfed983e7bf4530c461175ff0bb47f"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.328731 5072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db181b35d5ddd8cb7ce31d9293b82a515a8889794cf9696c664b101693247cc6"}
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.329844 5072 scope.go:117] "RemoveContainer" containerID="bfd40dad8f619581f0615e6e2037e751d4dfed983e7bf4530c461175ff0bb47f"
Nov 24 11:19:30 crc kubenswrapper[5072]: E1124 11:19:30.330156 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-t8b9x_openshift-multus(1a9fe7b3-71a3-4388-8ee4-7531ceef6049)\"" pod="openshift-multus/multus-t8b9x" podUID="1a9fe7b3-71a3-4388-8ee4-7531ceef6049"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.343603 5072 scope.go:117] "RemoveContainer" containerID="b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.403840 5072 scope.go:117] "RemoveContainer" containerID="af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.432244 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n4qmw"]
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.435074 5072 scope.go:117] "RemoveContainer" containerID="89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.436274 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n4qmw"]
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.448064 5072 scope.go:117] "RemoveContainer" containerID="9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.463710 5072 scope.go:117] "RemoveContainer" containerID="c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.481020 5072 scope.go:117] "RemoveContainer" containerID="1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.547794 5072 scope.go:117] "RemoveContainer" containerID="98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.568873 5072 scope.go:117] "RemoveContainer" containerID="7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.584933 5072 scope.go:117] "RemoveContainer" containerID="c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.600337 5072 scope.go:117] "RemoveContainer" containerID="742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93"
Nov 24 11:19:30 crc kubenswrapper[5072]: E1124 11:19:30.601416 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93\": container with ID starting with 742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93 not found: ID does not exist" containerID="742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.601465 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93"} err="failed to get container status \"742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93\": rpc error: code = NotFound desc = could not find container \"742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93\": container with ID starting with 742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.601503 5072 scope.go:117] "RemoveContainer" containerID="b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434"
Nov 24 11:19:30 crc kubenswrapper[5072]: E1124 11:19:30.601914 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434\": container with ID starting with b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434 not found: ID does not exist" containerID="b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.601949 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434"} err="failed to get container status \"b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434\": rpc error: code = NotFound desc = could not find container \"b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434\": container with ID starting with b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.601972 5072 scope.go:117] "RemoveContainer" containerID="af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975"
Nov 24 11:19:30 crc kubenswrapper[5072]: E1124 11:19:30.602533 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\": container with ID starting with af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975 not found: ID does not exist" containerID="af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.602572 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975"} err="failed to get container status \"af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\": rpc error: code = NotFound desc = could not find container \"af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\": container with ID starting with af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.602594 5072 scope.go:117] "RemoveContainer" containerID="89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491"
Nov 24 11:19:30 crc kubenswrapper[5072]: E1124 11:19:30.602986 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\": container with ID starting with 89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491 not found: ID does not exist" containerID="89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.603019 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491"} err="failed to get container status \"89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\": rpc error: code = NotFound desc = could not find container \"89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\": container with ID starting with 89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.603042 5072 scope.go:117] "RemoveContainer" containerID="9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39"
Nov 24 11:19:30 crc kubenswrapper[5072]: E1124 11:19:30.603546 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\": container with ID starting with 9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39 not found: ID does not exist" containerID="9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.603584 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39"} err="failed to get container status \"9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\": rpc error: code = NotFound desc = could not find container \"9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\": container with ID starting with 9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.603606 5072 scope.go:117] "RemoveContainer" containerID="c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24"
Nov 24 11:19:30 crc kubenswrapper[5072]: E1124 11:19:30.603973 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\": container with ID starting with c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24 not found: ID does not exist" containerID="c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.604007 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24"} err="failed to get container status \"c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\": rpc error: code = NotFound desc = could not find container \"c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\": container with ID starting with c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.604031 5072 scope.go:117] "RemoveContainer" containerID="1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790"
Nov 24 11:19:30 crc kubenswrapper[5072]: E1124 11:19:30.605357 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\": container with ID starting with 1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790 not found: ID does not exist" containerID="1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.605440 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790"} err="failed to get container status \"1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\": rpc error: code = NotFound desc = could not find container \"1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\": container with ID starting with 1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.605464 5072 scope.go:117] "RemoveContainer" containerID="98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9"
Nov 24 11:19:30 crc kubenswrapper[5072]: E1124 11:19:30.607895 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\": container with ID starting with 98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9 not found: ID does not exist" containerID="98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.607943 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9"} err="failed to get container status \"98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\": rpc error: code = NotFound desc = could not find container \"98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\": container with ID starting with 98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.607974 5072 scope.go:117] "RemoveContainer" containerID="7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb"
Nov 24 11:19:30 crc kubenswrapper[5072]: E1124 11:19:30.608494 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\": container with ID starting with 7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb not found: ID does not exist" containerID="7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.608528 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb"} err="failed to get container status \"7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\": rpc error: code = NotFound desc = could not find container \"7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\": container with ID starting with 7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.608546 5072 scope.go:117] "RemoveContainer" containerID="c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413"
Nov 24 11:19:30 crc kubenswrapper[5072]: E1124 11:19:30.608893 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\": container with ID starting with c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413 not found: ID does not exist" containerID="c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.608924 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413"} err="failed to get container status \"c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\": rpc error: code = NotFound desc = could not find container \"c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\": container with ID starting with c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.608946 5072 scope.go:117] "RemoveContainer" containerID="742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.609462 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93"} err="failed to get container status \"742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93\": rpc error: code = NotFound desc = could not find container \"742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93\": container with ID starting with 742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.609498 5072 scope.go:117] "RemoveContainer" containerID="b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.609928 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434"} err="failed to get container status \"b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434\": rpc error: code = NotFound desc = could not find container \"b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434\": container with ID starting with b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.609956 5072 scope.go:117] "RemoveContainer" containerID="af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.610317 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975"} err="failed to get container status \"af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\": rpc error: code = NotFound desc = could not find container \"af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\": container with ID starting with af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.610353 5072 scope.go:117] "RemoveContainer" containerID="89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.610777 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491"} err="failed to get container status \"89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\": rpc error: code = NotFound desc = could not find container \"89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\": container with ID starting with 89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.610836 5072 scope.go:117] "RemoveContainer" containerID="9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.611469 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39"} err="failed to get container status \"9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\": rpc error: code = NotFound desc = could not find container \"9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\": container with ID starting with 9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.611499 5072 scope.go:117] "RemoveContainer" containerID="c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.611723 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24"} err="failed to get container status \"c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\": rpc error: code = NotFound desc = could not find container \"c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\": container with ID starting with c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.611750 5072 scope.go:117] "RemoveContainer" containerID="1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.612046 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790"} err="failed to get container status \"1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\": rpc error: code = NotFound desc = could not find container \"1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\": container with ID starting with 1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.612067 5072 scope.go:117] "RemoveContainer" containerID="98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.612428 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9"} err="failed to get container status \"98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\": rpc error: code = NotFound desc = could not find container \"98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\": container with ID starting with 98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.612452 5072 scope.go:117] "RemoveContainer" containerID="7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.612710 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb"} err="failed to get container status \"7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\": rpc error: code = NotFound desc = could not find container \"7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\": container with ID starting with 7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.612733 5072 scope.go:117] "RemoveContainer" containerID="c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.612928 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413"} err="failed to get container status \"c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\": rpc error: code = NotFound desc = could not find container \"c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\": container with ID starting with c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.612985 5072 scope.go:117] "RemoveContainer" containerID="742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.613400 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93"} err="failed to get container status \"742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93\": rpc error: code = NotFound desc = could not find container \"742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93\": container with ID starting with 742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.613447 5072 scope.go:117] "RemoveContainer" containerID="b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.613737 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434"} err="failed to get container status \"b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434\": rpc error: code = NotFound desc = could not find container \"b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434\": container with ID starting with b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.613761 5072 scope.go:117] "RemoveContainer" containerID="af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.614283 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975"} err="failed to get container status \"af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\": rpc error: code = NotFound desc = could not find container \"af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\": container with ID starting with af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.614337 5072 scope.go:117] "RemoveContainer" containerID="89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.614664 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491"} err="failed to get container status \"89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\": rpc error: code = NotFound desc = could not find container \"89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\": container with ID starting with 89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.614694 5072 scope.go:117] "RemoveContainer" containerID="9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.615161 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39"} err="failed to get container status \"9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\": rpc error: code = NotFound desc = could not find container \"9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\": container with ID starting with 9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.615185 5072 scope.go:117] "RemoveContainer" containerID="c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.615637 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24"} err="failed to get container status \"c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\": rpc error: code = NotFound desc = could not find container \"c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\": container with ID starting with c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.615689 5072 scope.go:117] "RemoveContainer" containerID="1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.616144 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790"} err="failed to get container status \"1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\": rpc error: code = NotFound desc = could not find container \"1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\": container with ID starting with 1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.616165 5072 scope.go:117] "RemoveContainer" containerID="98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.616550 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9"} err="failed to get container status \"98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\": rpc error: code = NotFound desc = could not find container \"98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\": container with ID starting with 98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.616572 5072 scope.go:117] "RemoveContainer" containerID="7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.616885 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb"} err="failed to get container status \"7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\": rpc error: code = NotFound desc = could not find container \"7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\": container with ID starting with 7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.616908 5072 scope.go:117] "RemoveContainer" containerID="c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.617188 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413"} err="failed to get container status \"c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\": rpc error: code = NotFound desc = could not find container \"c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\": container with ID starting with c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.617206 5072 scope.go:117] "RemoveContainer" containerID="742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.617491 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93"} err="failed to get container status \"742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93\": rpc error: code = NotFound desc = could not find container \"742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93\": container with ID starting with 742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.617517 5072 scope.go:117] "RemoveContainer" containerID="b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.617802 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434"} err="failed to get container status \"b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434\": rpc error: code = NotFound desc = could not find container \"b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434\": container with ID starting with b30fc71ef9fdf26e114844d344131e79b2ea981d3e69760bb92b1279f0b3c434 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.617822 5072 scope.go:117] "RemoveContainer" containerID="af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.618082 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975"} err="failed to get container status \"af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\": rpc error: code = NotFound desc = could not find container \"af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975\": container with ID starting with af4c3d6857b6aaa6a401604f5423cfb55488de707a08698b4cf9f420b9c07975 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.618122 5072 scope.go:117] "RemoveContainer" containerID="89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.618438 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491"} err="failed to get container status \"89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\": rpc error: code = NotFound desc = could not find container \"89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491\": container with ID starting with 89dd7133a078fe05808fdf20f22b6939004406ae85d3b6ef854a3e4031350491 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.618465 5072 scope.go:117] "RemoveContainer" containerID="9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.618798 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39"} err="failed to get container status \"9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\": rpc error: code = NotFound desc = could not find container \"9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39\": container with ID starting with 9f6526ffcce8bc139bd9442203e460c71b46e2e8cf9e1f0d03beb067f5dc1c39 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.618817 5072 scope.go:117] "RemoveContainer" containerID="c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.619440 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24"} err="failed to get container status \"c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\": rpc error: code = NotFound desc = could not find container \"c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24\": container with ID starting with c82cb1df0677da29463f84139b09b8ee263695e4c994ef7d17846556260b5c24 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.619464 5072 scope.go:117] "RemoveContainer" containerID="1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.619759 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790"} err="failed to get container status \"1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\": rpc error: code = NotFound desc = could not find container \"1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790\": container with ID starting with 1421e4bd297d99e68c36da933221bbabf8d74aa5fbfa7cbfe831215de52d4790 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.619780 5072 scope.go:117] "RemoveContainer" containerID="98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.620079 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9"} err="failed to get container status \"98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\": rpc error: code = NotFound desc = could not find container \"98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9\": container with ID starting with 98470930757c0529cc831f91feab9f4b004c808efbfdf40e3e95b12e6af1c6d9 not found: ID does not exist"
Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.620105 5072 scope.go:117] "RemoveContainer"
containerID="7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb" Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.620468 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb"} err="failed to get container status \"7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\": rpc error: code = NotFound desc = could not find container \"7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb\": container with ID starting with 7621cb39fa8d0330ee899d4962150519618be95eabfc592e6678bb5f5fbbdbfb not found: ID does not exist" Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.620486 5072 scope.go:117] "RemoveContainer" containerID="c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413" Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.620779 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413"} err="failed to get container status \"c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\": rpc error: code = NotFound desc = could not find container \"c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413\": container with ID starting with c0e42de297a1b5aa168b806343c5d536986b1c64e73965e377e5d412ef4f7413 not found: ID does not exist" Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.620796 5072 scope.go:117] "RemoveContainer" containerID="742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93" Nov 24 11:19:30 crc kubenswrapper[5072]: I1124 11:19:30.621057 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93"} err="failed to get container status \"742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93\": rpc error: code = NotFound desc = could not find container \"742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93\": container with ID starting with 742ede6186d9ba2c21d0ef3f6150d749e4713eec1d303faa160b73247570dd93 not found: ID does not exist" Nov 24 11:19:31 crc kubenswrapper[5072]: I1124 11:19:31.026218 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80fda759-ddfd-438a-b5a2-cb775ee1bf7e" path="/var/lib/kubelet/pods/80fda759-ddfd-438a-b5a2-cb775ee1bf7e/volumes" Nov 24 11:19:31 crc kubenswrapper[5072]: I1124 11:19:31.343635 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" event={"ID":"77aeb6df-2cbe-4e4d-a103-d530f95eee80","Type":"ContainerStarted","Data":"bac1f206a1ea20b5ccba9931b7e57f50c278dd12f50bf4b24216921e578d2c8c"} Nov 24 11:19:31 crc kubenswrapper[5072]: I1124 11:19:31.343918 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" event={"ID":"77aeb6df-2cbe-4e4d-a103-d530f95eee80","Type":"ContainerStarted","Data":"6c5ca14bb4ee114bcf24e605ad8b44d4811f6ea4ae47dfc8173648c960a11d0d"} Nov 24 11:19:31 crc kubenswrapper[5072]: I1124 11:19:31.343945 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" event={"ID":"77aeb6df-2cbe-4e4d-a103-d530f95eee80","Type":"ContainerStarted","Data":"b39bf808540ddc1d5616b455831393b1d4075f1c64b74b086c090fc8c9d80ea7"} Nov 24 11:19:31 crc kubenswrapper[5072]: I1124 11:19:31.343964 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" event={"ID":"77aeb6df-2cbe-4e4d-a103-d530f95eee80","Type":"ContainerStarted","Data":"a2b3b0fff8e1ad079732a48ce5acef8e6667cd3449cee4218036a4f2897e77ef"} Nov 24 11:19:31 crc kubenswrapper[5072]: I1124 11:19:31.343982 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" event={"ID":"77aeb6df-2cbe-4e4d-a103-d530f95eee80","Type":"ContainerStarted","Data":"8e87c24349c407c5f348026d2e37dca759dce333297ce1dfba47f091a032121a"} Nov 24 11:19:31 crc kubenswrapper[5072]: I1124 11:19:31.344000 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" event={"ID":"77aeb6df-2cbe-4e4d-a103-d530f95eee80","Type":"ContainerStarted","Data":"74643b0cc4d342e023370dd54c9b131bfb23b613287aa34f5f613961f79c1c51"} Nov 24 11:19:34 crc kubenswrapper[5072]: I1124 11:19:34.372712 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" event={"ID":"77aeb6df-2cbe-4e4d-a103-d530f95eee80","Type":"ContainerStarted","Data":"aa6be6df03cb316956bdda543016ef54eb8ec197cb950a8cc96ca1ff51d3c2a9"} Nov 24 11:19:36 crc kubenswrapper[5072]: I1124 11:19:36.391862 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" event={"ID":"77aeb6df-2cbe-4e4d-a103-d530f95eee80","Type":"ContainerStarted","Data":"fcdf66cdf1e98567c5369392ec6f9efd49a2399608417236c87fb6e13bc7e2ac"} Nov 24 11:19:36 crc kubenswrapper[5072]: I1124 11:19:36.392944 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" Nov 24 11:19:36 crc kubenswrapper[5072]: I1124 11:19:36.393043 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" Nov 24 11:19:36 crc kubenswrapper[5072]: I1124 11:19:36.393127 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" Nov 24 11:19:36 crc kubenswrapper[5072]: I1124 11:19:36.420946 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" podStartSLOduration=7.420926284 podStartE2EDuration="7.420926284s" podCreationTimestamp="2025-11-24 11:19:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:19:36.420560795 +0000 UTC m=+628.132085311" watchObservedRunningTime="2025-11-24 11:19:36.420926284 +0000 UTC m=+628.132450770" Nov 24 11:19:36 crc kubenswrapper[5072]: I1124 11:19:36.425940 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" Nov 24 11:19:36 crc kubenswrapper[5072]: I1124 11:19:36.428018 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" Nov 24 11:19:42 crc kubenswrapper[5072]: I1124 11:19:42.016885 5072 scope.go:117] "RemoveContainer" containerID="bfd40dad8f619581f0615e6e2037e751d4dfed983e7bf4530c461175ff0bb47f" Nov 24 11:19:42 crc kubenswrapper[5072]: E1124 11:19:42.017817 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-t8b9x_openshift-multus(1a9fe7b3-71a3-4388-8ee4-7531ceef6049)\"" pod="openshift-multus/multus-t8b9x" 
podUID="1a9fe7b3-71a3-4388-8ee4-7531ceef6049" Nov 24 11:19:53 crc kubenswrapper[5072]: I1124 11:19:53.017122 5072 scope.go:117] "RemoveContainer" containerID="bfd40dad8f619581f0615e6e2037e751d4dfed983e7bf4530c461175ff0bb47f" Nov 24 11:19:53 crc kubenswrapper[5072]: I1124 11:19:53.955833 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t8b9x_1a9fe7b3-71a3-4388-8ee4-7531ceef6049/kube-multus/2.log" Nov 24 11:19:53 crc kubenswrapper[5072]: I1124 11:19:53.956585 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t8b9x_1a9fe7b3-71a3-4388-8ee4-7531ceef6049/kube-multus/1.log" Nov 24 11:19:53 crc kubenswrapper[5072]: I1124 11:19:53.956663 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t8b9x" event={"ID":"1a9fe7b3-71a3-4388-8ee4-7531ceef6049","Type":"ContainerStarted","Data":"9f0f68346753b92e742d87ca1aebe90aaf75907c3b9fbab3d4f46727ca621cac"} Nov 24 11:20:00 crc kubenswrapper[5072]: I1124 11:20:00.120457 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z4bj4" Nov 24 11:20:08 crc kubenswrapper[5072]: I1124 11:20:08.813643 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw"] Nov 24 11:20:08 crc kubenswrapper[5072]: I1124 11:20:08.815713 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw" Nov 24 11:20:08 crc kubenswrapper[5072]: I1124 11:20:08.818611 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 11:20:08 crc kubenswrapper[5072]: I1124 11:20:08.822237 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw"] Nov 24 11:20:08 crc kubenswrapper[5072]: I1124 11:20:08.965796 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw\" (UID: \"0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw" Nov 24 11:20:08 crc kubenswrapper[5072]: I1124 11:20:08.966643 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw\" (UID: \"0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw" Nov 24 11:20:08 crc kubenswrapper[5072]: I1124 11:20:08.966779 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46qqx\" (UniqueName: \"kubernetes.io/projected/0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76-kube-api-access-46qqx\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw\" (UID: \"0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw" Nov 24 11:20:09 crc kubenswrapper[5072]: I1124 11:20:09.068272 5072 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-46qqx\" (UniqueName: \"kubernetes.io/projected/0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76-kube-api-access-46qqx\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw\" (UID: \"0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw" Nov 24 11:20:09 crc kubenswrapper[5072]: I1124 11:20:09.068362 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw\" (UID: \"0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw" Nov 24 11:20:09 crc kubenswrapper[5072]: I1124 11:20:09.068437 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw\" (UID: \"0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw" Nov 24 11:20:09 crc kubenswrapper[5072]: I1124 11:20:09.069068 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw\" (UID: \"0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw" Nov 24 11:20:09 crc kubenswrapper[5072]: I1124 11:20:09.069098 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw\" (UID: \"0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw" Nov 24 11:20:09 crc kubenswrapper[5072]: I1124 11:20:09.093972 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46qqx\" (UniqueName: \"kubernetes.io/projected/0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76-kube-api-access-46qqx\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw\" (UID: \"0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw" Nov 24 11:20:09 crc kubenswrapper[5072]: I1124 11:20:09.139356 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 11:20:09 crc kubenswrapper[5072]: I1124 11:20:09.148080 5072 util.go:30] "No sandbox for pod can be found. 
Nov 24 11:20:09 crc kubenswrapper[5072]: I1124 11:20:09.139356 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Nov 24 11:20:09 crc kubenswrapper[5072]: I1124 11:20:09.148080 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw"
Nov 24 11:20:09 crc kubenswrapper[5072]: I1124 11:20:09.269843 5072 scope.go:117] "RemoveContainer" containerID="db181b35d5ddd8cb7ce31d9293b82a515a8889794cf9696c664b101693247cc6"
Nov 24 11:20:09 crc kubenswrapper[5072]: I1124 11:20:09.386176 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw"]
Nov 24 11:20:09 crc kubenswrapper[5072]: W1124 11:20:09.390094 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b557c16_ec3a_4ee2_96cb_f1fbcfa23f76.slice/crio-a5f53b8f7bd003ee95f3c01d758ea188e3e87293c7c18db7e614b675647343f5 WatchSource:0}: Error finding container a5f53b8f7bd003ee95f3c01d758ea188e3e87293c7c18db7e614b675647343f5: Status 404 returned error can't find the container with id a5f53b8f7bd003ee95f3c01d758ea188e3e87293c7c18db7e614b675647343f5
Nov 24 11:20:10 crc kubenswrapper[5072]: I1124 11:20:10.072457 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t8b9x_1a9fe7b3-71a3-4388-8ee4-7531ceef6049/kube-multus/2.log"
Nov 24 11:20:10 crc kubenswrapper[5072]: I1124 11:20:10.077275 5072 generic.go:334] "Generic (PLEG): container finished" podID="0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76" containerID="077f0ad5b258ccb4db30e47fedf4edcb9424e0f52eace9d21d5925b25e2b0d9e" exitCode=0
Nov 24 11:20:10 crc kubenswrapper[5072]: I1124 11:20:10.077320 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw" event={"ID":"0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76","Type":"ContainerDied","Data":"077f0ad5b258ccb4db30e47fedf4edcb9424e0f52eace9d21d5925b25e2b0d9e"}
Nov 24 11:20:10 crc kubenswrapper[5072]: I1124 11:20:10.077346 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw" event={"ID":"0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76","Type":"ContainerStarted","Data":"a5f53b8f7bd003ee95f3c01d758ea188e3e87293c7c18db7e614b675647343f5"}
Nov 24 11:20:12 crc kubenswrapper[5072]: I1124 11:20:12.094445 5072 generic.go:334] "Generic (PLEG): container finished" podID="0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76" containerID="b1f022d0e092dd922de8b2214508a1683a92a544d1ac4ba160c968876fee1063" exitCode=0
Nov 24 11:20:12 crc kubenswrapper[5072]: I1124 11:20:12.094531 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw" event={"ID":"0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76","Type":"ContainerDied","Data":"b1f022d0e092dd922de8b2214508a1683a92a544d1ac4ba160c968876fee1063"}
Nov 24 11:20:13 crc kubenswrapper[5072]: I1124 11:20:13.106893 5072 generic.go:334] "Generic (PLEG): container finished" podID="0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76" containerID="17aeb74f2df4d2ac96854dd6bbd6553882769674efdc0bf43e3944c3628b22f6" exitCode=0
Nov 24 11:20:13 crc kubenswrapper[5072]: I1124 11:20:13.106929 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw" event={"ID":"0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76","Type":"ContainerDied","Data":"17aeb74f2df4d2ac96854dd6bbd6553882769674efdc0bf43e3944c3628b22f6"}
Nov 24 11:20:14 crc kubenswrapper[5072]: I1124 11:20:14.466871 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw"
Nov 24 11:20:14 crc kubenswrapper[5072]: I1124 11:20:14.550236 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46qqx\" (UniqueName: \"kubernetes.io/projected/0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76-kube-api-access-46qqx\") pod \"0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76\" (UID: \"0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76\") "
Nov 24 11:20:14 crc kubenswrapper[5072]: I1124 11:20:14.550339 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76-bundle\") pod \"0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76\" (UID: \"0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76\") "
Nov 24 11:20:14 crc kubenswrapper[5072]: I1124 11:20:14.550459 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76-util\") pod \"0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76\" (UID: \"0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76\") "
Nov 24 11:20:14 crc kubenswrapper[5072]: I1124 11:20:14.551313 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76-bundle" (OuterVolumeSpecName: "bundle") pod "0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76" (UID: "0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 11:20:14 crc kubenswrapper[5072]: I1124 11:20:14.556813 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76-kube-api-access-46qqx" (OuterVolumeSpecName: "kube-api-access-46qqx") pod "0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76" (UID: "0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76"). InnerVolumeSpecName "kube-api-access-46qqx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:20:14 crc kubenswrapper[5072]: I1124 11:20:14.579066 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76-util" (OuterVolumeSpecName: "util") pod "0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76" (UID: "0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
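[Annotation] Every kubenswrapper entry above follows the klog header format (severity letter plus MMDD, wall time, PID, source file:line, then the message), which makes the journal easy to post-process, e.g. to correlate the ContainerDied and UnmountVolume sequence by pod UID. A throwaway parser sketch; the field names and regexp are mine, not part of klog:

```go
package main

import (
	"fmt"
	"regexp"
)

// Matches e.g.: I1124 11:20:14.466871 5072 util.go:48] "No ready sandbox ..."
var klogRe = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

func main() {
	line := `I1124 11:20:14.466871 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one"`
	m := klogRe.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s msg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
```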
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:20:14 crc kubenswrapper[5072]: I1124 11:20:14.653368 5072 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76-util\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:14 crc kubenswrapper[5072]: I1124 11:20:14.654216 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46qqx\" (UniqueName: \"kubernetes.io/projected/0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76-kube-api-access-46qqx\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:14 crc kubenswrapper[5072]: I1124 11:20:14.654343 5072 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:15 crc kubenswrapper[5072]: I1124 11:20:15.124300 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw" event={"ID":"0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76","Type":"ContainerDied","Data":"a5f53b8f7bd003ee95f3c01d758ea188e3e87293c7c18db7e614b675647343f5"} Nov 24 11:20:15 crc kubenswrapper[5072]: I1124 11:20:15.124362 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5f53b8f7bd003ee95f3c01d758ea188e3e87293c7c18db7e614b675647343f5" Nov 24 11:20:15 crc kubenswrapper[5072]: I1124 11:20:15.124396 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw" Nov 24 11:20:16 crc kubenswrapper[5072]: I1124 11:20:16.267213 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-q824z"] Nov 24 11:20:16 crc kubenswrapper[5072]: E1124 11:20:16.267439 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76" containerName="util" Nov 24 11:20:16 crc kubenswrapper[5072]: I1124 11:20:16.267452 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76" containerName="util" Nov 24 11:20:16 crc kubenswrapper[5072]: E1124 11:20:16.267464 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76" containerName="pull" Nov 24 11:20:16 crc kubenswrapper[5072]: I1124 11:20:16.267471 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76" containerName="pull" Nov 24 11:20:16 crc kubenswrapper[5072]: E1124 11:20:16.267494 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76" containerName="extract" Nov 24 11:20:16 crc kubenswrapper[5072]: I1124 11:20:16.267502 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76" containerName="extract" Nov 24 11:20:16 crc kubenswrapper[5072]: I1124 11:20:16.267626 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76" containerName="extract" Nov 24 11:20:16 crc kubenswrapper[5072]: I1124 11:20:16.268047 5072 util.go:30] "No sandbox for pod can be found. 
Nov 24 11:20:16 crc kubenswrapper[5072]: I1124 11:20:16.268047 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-q824z"
Nov 24 11:20:16 crc kubenswrapper[5072]: I1124 11:20:16.270059 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Nov 24 11:20:16 crc kubenswrapper[5072]: I1124 11:20:16.270078 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-dpkvx"
Nov 24 11:20:16 crc kubenswrapper[5072]: I1124 11:20:16.270312 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Nov 24 11:20:16 crc kubenswrapper[5072]: I1124 11:20:16.276799 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-q824z"]
Nov 24 11:20:16 crc kubenswrapper[5072]: I1124 11:20:16.379166 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q64ld\" (UniqueName: \"kubernetes.io/projected/b5b7e963-3dd2-4073-9297-2b03a0411ff3-kube-api-access-q64ld\") pod \"nmstate-operator-557fdffb88-q824z\" (UID: \"b5b7e963-3dd2-4073-9297-2b03a0411ff3\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-q824z"
Nov 24 11:20:16 crc kubenswrapper[5072]: I1124 11:20:16.480569 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q64ld\" (UniqueName: \"kubernetes.io/projected/b5b7e963-3dd2-4073-9297-2b03a0411ff3-kube-api-access-q64ld\") pod \"nmstate-operator-557fdffb88-q824z\" (UID: \"b5b7e963-3dd2-4073-9297-2b03a0411ff3\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-q824z"
Nov 24 11:20:16 crc kubenswrapper[5072]: I1124 11:20:16.502099 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q64ld\" (UniqueName: \"kubernetes.io/projected/b5b7e963-3dd2-4073-9297-2b03a0411ff3-kube-api-access-q64ld\") pod \"nmstate-operator-557fdffb88-q824z\" (UID: \"b5b7e963-3dd2-4073-9297-2b03a0411ff3\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-q824z"
Nov 24 11:20:16 crc kubenswrapper[5072]: I1124 11:20:16.628846 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-q824z"
Nov 24 11:20:16 crc kubenswrapper[5072]: I1124 11:20:16.891406 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-q824z"]
Nov 24 11:20:16 crc kubenswrapper[5072]: W1124 11:20:16.903583 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5b7e963_3dd2_4073_9297_2b03a0411ff3.slice/crio-85087aab8a1dcf9bb00e4f63f07d2543e2264cfcf4262f1bf32cefe6a95dc900 WatchSource:0}: Error finding container 85087aab8a1dcf9bb00e4f63f07d2543e2264cfcf4262f1bf32cefe6a95dc900: Status 404 returned error can't find the container with id 85087aab8a1dcf9bb00e4f63f07d2543e2264cfcf4262f1bf32cefe6a95dc900
Nov 24 11:20:17 crc kubenswrapper[5072]: I1124 11:20:17.137220 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-q824z" event={"ID":"b5b7e963-3dd2-4073-9297-2b03a0411ff3","Type":"ContainerStarted","Data":"85087aab8a1dcf9bb00e4f63f07d2543e2264cfcf4262f1bf32cefe6a95dc900"}
Nov 24 11:20:19 crc kubenswrapper[5072]: I1124 11:20:19.154018 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-q824z" event={"ID":"b5b7e963-3dd2-4073-9297-2b03a0411ff3","Type":"ContainerStarted","Data":"c9b3b3ca2e7b82826885b3135f699dd7aed772c0f89039410464abea327ff908"}
Nov 24 11:20:19 crc kubenswrapper[5072]: I1124 11:20:19.175442 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-q824z" podStartSLOduration=1.309805055 podStartE2EDuration="3.175416167s" podCreationTimestamp="2025-11-24 11:20:16 +0000 UTC" firstStartedPulling="2025-11-24 11:20:16.906500789 +0000 UTC m=+668.618025265" lastFinishedPulling="2025-11-24 11:20:18.772111901 +0000 UTC m=+670.483636377" observedRunningTime="2025-11-24 11:20:19.174175017 +0000 UTC m=+670.885699533" watchObservedRunningTime="2025-11-24 11:20:19.175416167 +0000 UTC m=+670.886940683"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.058258 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-2ntqs"]
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.059280 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-2ntqs"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.063260 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-hlpmv"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.072585 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-2ntqs"]
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.079183 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-9x2g2"]
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.079975 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-9x2g2"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.085938 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.107451 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-9x2g2"]
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.113089 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-hhvlc"]
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.113814 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-hhvlc"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.131260 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gt88\" (UniqueName: \"kubernetes.io/projected/186c5c36-95cc-427c-af18-4ba4d0c8ea58-kube-api-access-8gt88\") pod \"nmstate-metrics-5dcf9c57c5-2ntqs\" (UID: \"186c5c36-95cc-427c-af18-4ba4d0c8ea58\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-2ntqs"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.131348 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/56a60d6f-8026-4722-95ad-aa81efc124f8-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-9x2g2\" (UID: \"56a60d6f-8026-4722-95ad-aa81efc124f8\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-9x2g2"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.131430 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzkdl\" (UniqueName: \"kubernetes.io/projected/56a60d6f-8026-4722-95ad-aa81efc124f8-kube-api-access-mzkdl\") pod \"nmstate-webhook-6b89b748d8-9x2g2\" (UID: \"56a60d6f-8026-4722-95ad-aa81efc124f8\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-9x2g2"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.188965 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ppjv5"]
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.190032 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ppjv5"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.191881 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.192530 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-5zzjk"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.192562 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.201126 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ppjv5"]
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.232082 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9b1242fa-766e-4ef6-b41f-0cc670aa35c2-nmstate-lock\") pod \"nmstate-handler-hhvlc\" (UID: \"9b1242fa-766e-4ef6-b41f-0cc670aa35c2\") " pod="openshift-nmstate/nmstate-handler-hhvlc"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.232146 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/56a60d6f-8026-4722-95ad-aa81efc124f8-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-9x2g2\" (UID: \"56a60d6f-8026-4722-95ad-aa81efc124f8\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-9x2g2"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.232171 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9b1242fa-766e-4ef6-b41f-0cc670aa35c2-dbus-socket\") pod \"nmstate-handler-hhvlc\" (UID: \"9b1242fa-766e-4ef6-b41f-0cc670aa35c2\") " pod="openshift-nmstate/nmstate-handler-hhvlc"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.232204 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/abe6e260-c56f-46ff-b5a7-a7da6df2b64f-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-ppjv5\" (UID: \"abe6e260-c56f-46ff-b5a7-a7da6df2b64f\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ppjv5"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.232235 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbp8s\" (UniqueName: \"kubernetes.io/projected/abe6e260-c56f-46ff-b5a7-a7da6df2b64f-kube-api-access-rbp8s\") pod \"nmstate-console-plugin-5874bd7bc5-ppjv5\" (UID: \"abe6e260-c56f-46ff-b5a7-a7da6df2b64f\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ppjv5"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.232251 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4tt2\" (UniqueName: \"kubernetes.io/projected/9b1242fa-766e-4ef6-b41f-0cc670aa35c2-kube-api-access-q4tt2\") pod \"nmstate-handler-hhvlc\" (UID: \"9b1242fa-766e-4ef6-b41f-0cc670aa35c2\") " pod="openshift-nmstate/nmstate-handler-hhvlc"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.232275 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/abe6e260-c56f-46ff-b5a7-a7da6df2b64f-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-ppjv5\" (UID: \"abe6e260-c56f-46ff-b5a7-a7da6df2b64f\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ppjv5"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.232304 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzkdl\" (UniqueName: \"kubernetes.io/projected/56a60d6f-8026-4722-95ad-aa81efc124f8-kube-api-access-mzkdl\") pod \"nmstate-webhook-6b89b748d8-9x2g2\" (UID: \"56a60d6f-8026-4722-95ad-aa81efc124f8\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-9x2g2"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.232320 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9b1242fa-766e-4ef6-b41f-0cc670aa35c2-ovs-socket\") pod \"nmstate-handler-hhvlc\" (UID: \"9b1242fa-766e-4ef6-b41f-0cc670aa35c2\") " pod="openshift-nmstate/nmstate-handler-hhvlc"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.232344 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gt88\" (UniqueName: \"kubernetes.io/projected/186c5c36-95cc-427c-af18-4ba4d0c8ea58-kube-api-access-8gt88\") pod \"nmstate-metrics-5dcf9c57c5-2ntqs\" (UID: \"186c5c36-95cc-427c-af18-4ba4d0c8ea58\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-2ntqs"
Nov 24 11:20:20 crc kubenswrapper[5072]: E1124 11:20:20.232781 5072 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Nov 24 11:20:20 crc kubenswrapper[5072]: E1124 11:20:20.232848 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56a60d6f-8026-4722-95ad-aa81efc124f8-tls-key-pair podName:56a60d6f-8026-4722-95ad-aa81efc124f8 nodeName:}" failed. No retries permitted until 2025-11-24 11:20:20.73282795 +0000 UTC m=+672.444352426 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/56a60d6f-8026-4722-95ad-aa81efc124f8-tls-key-pair") pod "nmstate-webhook-6b89b748d8-9x2g2" (UID: "56a60d6f-8026-4722-95ad-aa81efc124f8") : secret "openshift-nmstate-webhook" not found
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.250441 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gt88\" (UniqueName: \"kubernetes.io/projected/186c5c36-95cc-427c-af18-4ba4d0c8ea58-kube-api-access-8gt88\") pod \"nmstate-metrics-5dcf9c57c5-2ntqs\" (UID: \"186c5c36-95cc-427c-af18-4ba4d0c8ea58\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-2ntqs"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.254342 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzkdl\" (UniqueName: \"kubernetes.io/projected/56a60d6f-8026-4722-95ad-aa81efc124f8-kube-api-access-mzkdl\") pod \"nmstate-webhook-6b89b748d8-9x2g2\" (UID: \"56a60d6f-8026-4722-95ad-aa81efc124f8\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-9x2g2"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.333858 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9b1242fa-766e-4ef6-b41f-0cc670aa35c2-dbus-socket\") pod \"nmstate-handler-hhvlc\" (UID: \"9b1242fa-766e-4ef6-b41f-0cc670aa35c2\") " pod="openshift-nmstate/nmstate-handler-hhvlc"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.333909 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/abe6e260-c56f-46ff-b5a7-a7da6df2b64f-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-ppjv5\" (UID: \"abe6e260-c56f-46ff-b5a7-a7da6df2b64f\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ppjv5"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.333941 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbp8s\" (UniqueName: \"kubernetes.io/projected/abe6e260-c56f-46ff-b5a7-a7da6df2b64f-kube-api-access-rbp8s\") pod \"nmstate-console-plugin-5874bd7bc5-ppjv5\" (UID: \"abe6e260-c56f-46ff-b5a7-a7da6df2b64f\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ppjv5"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.333957 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4tt2\" (UniqueName: \"kubernetes.io/projected/9b1242fa-766e-4ef6-b41f-0cc670aa35c2-kube-api-access-q4tt2\") pod \"nmstate-handler-hhvlc\" (UID: \"9b1242fa-766e-4ef6-b41f-0cc670aa35c2\") " pod="openshift-nmstate/nmstate-handler-hhvlc"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.333981 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/abe6e260-c56f-46ff-b5a7-a7da6df2b64f-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-ppjv5\" (UID: \"abe6e260-c56f-46ff-b5a7-a7da6df2b64f\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ppjv5"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.333998 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9b1242fa-766e-4ef6-b41f-0cc670aa35c2-ovs-socket\") pod \"nmstate-handler-hhvlc\" (UID: \"9b1242fa-766e-4ef6-b41f-0cc670aa35c2\") " pod="openshift-nmstate/nmstate-handler-hhvlc"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.334023 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9b1242fa-766e-4ef6-b41f-0cc670aa35c2-nmstate-lock\") pod \"nmstate-handler-hhvlc\" (UID: \"9b1242fa-766e-4ef6-b41f-0cc670aa35c2\") " pod="openshift-nmstate/nmstate-handler-hhvlc"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.334089 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9b1242fa-766e-4ef6-b41f-0cc670aa35c2-nmstate-lock\") pod \"nmstate-handler-hhvlc\" (UID: \"9b1242fa-766e-4ef6-b41f-0cc670aa35c2\") " pod="openshift-nmstate/nmstate-handler-hhvlc"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.334305 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9b1242fa-766e-4ef6-b41f-0cc670aa35c2-dbus-socket\") pod \"nmstate-handler-hhvlc\" (UID: \"9b1242fa-766e-4ef6-b41f-0cc670aa35c2\") " pod="openshift-nmstate/nmstate-handler-hhvlc"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.334729 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9b1242fa-766e-4ef6-b41f-0cc670aa35c2-ovs-socket\") pod \"nmstate-handler-hhvlc\" (UID: \"9b1242fa-766e-4ef6-b41f-0cc670aa35c2\") " pod="openshift-nmstate/nmstate-handler-hhvlc"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.336692 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/abe6e260-c56f-46ff-b5a7-a7da6df2b64f-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-ppjv5\" (UID: \"abe6e260-c56f-46ff-b5a7-a7da6df2b64f\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ppjv5"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.351228 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/abe6e260-c56f-46ff-b5a7-a7da6df2b64f-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-ppjv5\" (UID: \"abe6e260-c56f-46ff-b5a7-a7da6df2b64f\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ppjv5"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.360065 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbp8s\" (UniqueName: \"kubernetes.io/projected/abe6e260-c56f-46ff-b5a7-a7da6df2b64f-kube-api-access-rbp8s\") pod \"nmstate-console-plugin-5874bd7bc5-ppjv5\" (UID: \"abe6e260-c56f-46ff-b5a7-a7da6df2b64f\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ppjv5"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.362567 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4tt2\" (UniqueName: \"kubernetes.io/projected/9b1242fa-766e-4ef6-b41f-0cc670aa35c2-kube-api-access-q4tt2\") pod \"nmstate-handler-hhvlc\" (UID: \"9b1242fa-766e-4ef6-b41f-0cc670aa35c2\") " pod="openshift-nmstate/nmstate-handler-hhvlc"
Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.375728 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-77b98456d9-np5m6"]
Need to start a new one" pod="openshift-console/console-77b98456d9-np5m6" Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.376958 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-2ntqs" Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.392118 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-77b98456d9-np5m6"] Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.434805 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h6jd\" (UniqueName: \"kubernetes.io/projected/5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3-kube-api-access-9h6jd\") pod \"console-77b98456d9-np5m6\" (UID: \"5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3\") " pod="openshift-console/console-77b98456d9-np5m6" Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.434850 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3-oauth-serving-cert\") pod \"console-77b98456d9-np5m6\" (UID: \"5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3\") " pod="openshift-console/console-77b98456d9-np5m6" Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.434869 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3-console-serving-cert\") pod \"console-77b98456d9-np5m6\" (UID: \"5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3\") " pod="openshift-console/console-77b98456d9-np5m6" Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.435122 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3-console-oauth-config\") pod \"console-77b98456d9-np5m6\" (UID: \"5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3\") " pod="openshift-console/console-77b98456d9-np5m6" Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.435153 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3-service-ca\") pod \"console-77b98456d9-np5m6\" (UID: \"5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3\") " pod="openshift-console/console-77b98456d9-np5m6" Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.435192 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3-console-config\") pod \"console-77b98456d9-np5m6\" (UID: \"5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3\") " pod="openshift-console/console-77b98456d9-np5m6" Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.435362 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3-trusted-ca-bundle\") pod \"console-77b98456d9-np5m6\" (UID: \"5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3\") " pod="openshift-console/console-77b98456d9-np5m6" Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.437709 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-hhvlc" Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.505706 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ppjv5" Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.536537 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3-console-oauth-config\") pod \"console-77b98456d9-np5m6\" (UID: \"5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3\") " pod="openshift-console/console-77b98456d9-np5m6" Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.536567 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3-service-ca\") pod \"console-77b98456d9-np5m6\" (UID: \"5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3\") " pod="openshift-console/console-77b98456d9-np5m6" Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.536593 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3-console-config\") pod \"console-77b98456d9-np5m6\" (UID: \"5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3\") " pod="openshift-console/console-77b98456d9-np5m6" Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.536666 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3-trusted-ca-bundle\") pod \"console-77b98456d9-np5m6\" (UID: \"5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3\") " pod="openshift-console/console-77b98456d9-np5m6" Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.536683 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h6jd\" (UniqueName: \"kubernetes.io/projected/5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3-kube-api-access-9h6jd\") pod \"console-77b98456d9-np5m6\" (UID: \"5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3\") " pod="openshift-console/console-77b98456d9-np5m6" Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.536701 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3-oauth-serving-cert\") pod \"console-77b98456d9-np5m6\" (UID: \"5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3\") " pod="openshift-console/console-77b98456d9-np5m6" Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.536723 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3-console-serving-cert\") pod \"console-77b98456d9-np5m6\" (UID: \"5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3\") " pod="openshift-console/console-77b98456d9-np5m6" Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.537591 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3-console-config\") pod \"console-77b98456d9-np5m6\" (UID: \"5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3\") " pod="openshift-console/console-77b98456d9-np5m6" Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.538894 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3-service-ca\") pod \"console-77b98456d9-np5m6\" (UID: \"5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3\") " pod="openshift-console/console-77b98456d9-np5m6" Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.539671 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3-trusted-ca-bundle\") pod \"console-77b98456d9-np5m6\" (UID: \"5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3\") " pod="openshift-console/console-77b98456d9-np5m6" Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.542431 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3-console-oauth-config\") pod \"console-77b98456d9-np5m6\" (UID: \"5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3\") " pod="openshift-console/console-77b98456d9-np5m6" Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.543060 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3-oauth-serving-cert\") pod \"console-77b98456d9-np5m6\" (UID: \"5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3\") " pod="openshift-console/console-77b98456d9-np5m6" Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.543537 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3-console-serving-cert\") pod \"console-77b98456d9-np5m6\" (UID: \"5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3\") " pod="openshift-console/console-77b98456d9-np5m6" Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.555029 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h6jd\" (UniqueName: \"kubernetes.io/projected/5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3-kube-api-access-9h6jd\") pod \"console-77b98456d9-np5m6\" (UID: \"5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3\") " pod="openshift-console/console-77b98456d9-np5m6" Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.600090 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-2ntqs"] Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.704512 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ppjv5"] Nov 24 11:20:20 crc kubenswrapper[5072]: W1124 11:20:20.713050 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabe6e260_c56f_46ff_b5a7_a7da6df2b64f.slice/crio-22c9575f14d3f61f0f0f0c85fb56d6dae2700592f4865b674796ab98e3122b2d WatchSource:0}: Error finding container 22c9575f14d3f61f0f0f0c85fb56d6dae2700592f4865b674796ab98e3122b2d: Status 404 returned error can't find the container with id 22c9575f14d3f61f0f0f0c85fb56d6dae2700592f4865b674796ab98e3122b2d Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.735796 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-77b98456d9-np5m6" Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.739284 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/56a60d6f-8026-4722-95ad-aa81efc124f8-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-9x2g2\" (UID: \"56a60d6f-8026-4722-95ad-aa81efc124f8\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-9x2g2" Nov 24 11:20:20 crc kubenswrapper[5072]: I1124 11:20:20.745436 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/56a60d6f-8026-4722-95ad-aa81efc124f8-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-9x2g2\" (UID: \"56a60d6f-8026-4722-95ad-aa81efc124f8\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-9x2g2" Nov 24 11:20:21 crc kubenswrapper[5072]: I1124 11:20:21.003866 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-9x2g2" Nov 24 11:20:21 crc kubenswrapper[5072]: I1124 11:20:21.130779 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-77b98456d9-np5m6"] Nov 24 11:20:21 crc kubenswrapper[5072]: W1124 11:20:21.139450 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ad3209f_a1cb_4445_bcca_ecf7bec7d2b3.slice/crio-1fe9bdf7bf81ff5bccca481be368906f6f8600e4c0de0979b5c1907858017512 WatchSource:0}: Error finding container 1fe9bdf7bf81ff5bccca481be368906f6f8600e4c0de0979b5c1907858017512: Status 404 returned error can't find the container with id 1fe9bdf7bf81ff5bccca481be368906f6f8600e4c0de0979b5c1907858017512 Nov 24 11:20:21 crc kubenswrapper[5072]: I1124 11:20:21.168062 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ppjv5" event={"ID":"abe6e260-c56f-46ff-b5a7-a7da6df2b64f","Type":"ContainerStarted","Data":"22c9575f14d3f61f0f0f0c85fb56d6dae2700592f4865b674796ab98e3122b2d"} Nov 24 11:20:21 crc kubenswrapper[5072]: I1124 11:20:21.170231 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-77b98456d9-np5m6" event={"ID":"5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3","Type":"ContainerStarted","Data":"1fe9bdf7bf81ff5bccca481be368906f6f8600e4c0de0979b5c1907858017512"} Nov 24 11:20:21 crc kubenswrapper[5072]: I1124 11:20:21.171252 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-2ntqs" event={"ID":"186c5c36-95cc-427c-af18-4ba4d0c8ea58","Type":"ContainerStarted","Data":"9f985ebf2656185a90636763507275eaced98fef90d1140962fc2ac7fe119490"} Nov 24 11:20:21 crc kubenswrapper[5072]: I1124 11:20:21.173431 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-hhvlc" event={"ID":"9b1242fa-766e-4ef6-b41f-0cc670aa35c2","Type":"ContainerStarted","Data":"a2eb25c24fb398588bb3a99dc9e3874b889f2c127b69cc8b285951040f3a049d"} Nov 24 11:20:21 crc kubenswrapper[5072]: I1124 11:20:21.211117 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-9x2g2"] Nov 24 11:20:22 crc kubenswrapper[5072]: I1124 11:20:22.192414 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-77b98456d9-np5m6" 
event={"ID":"5ad3209f-a1cb-4445-bcca-ecf7bec7d2b3","Type":"ContainerStarted","Data":"2104bb1c3b57af889bb19a4960d03096ba7552a5ada34fe8e21f0ed3391979f7"} Nov 24 11:20:22 crc kubenswrapper[5072]: I1124 11:20:22.193790 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-9x2g2" event={"ID":"56a60d6f-8026-4722-95ad-aa81efc124f8","Type":"ContainerStarted","Data":"d7cdb513a2a13754189b2c2f2e7bb8a52f9f178f7a7fa6f4fe2d6eeabb82aa26"} Nov 24 11:20:22 crc kubenswrapper[5072]: I1124 11:20:22.212593 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-77b98456d9-np5m6" podStartSLOduration=2.212578333 podStartE2EDuration="2.212578333s" podCreationTimestamp="2025-11-24 11:20:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:20:22.209687764 +0000 UTC m=+673.921212260" watchObservedRunningTime="2025-11-24 11:20:22.212578333 +0000 UTC m=+673.924102809" Nov 24 11:20:24 crc kubenswrapper[5072]: I1124 11:20:24.207842 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-hhvlc" event={"ID":"9b1242fa-766e-4ef6-b41f-0cc670aa35c2","Type":"ContainerStarted","Data":"05156831dcf3b29175d08b080d00e95fbf863973758b9f04fd13b8bff2b2ad0b"} Nov 24 11:20:24 crc kubenswrapper[5072]: I1124 11:20:24.208509 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-hhvlc" Nov 24 11:20:24 crc kubenswrapper[5072]: I1124 11:20:24.210571 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ppjv5" event={"ID":"abe6e260-c56f-46ff-b5a7-a7da6df2b64f","Type":"ContainerStarted","Data":"34a920a1407a2ac188cc5054d62cf9311c0f717cfd3feeb0b435cd530a71790f"} Nov 24 11:20:24 crc kubenswrapper[5072]: I1124 11:20:24.213842 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-9x2g2" event={"ID":"56a60d6f-8026-4722-95ad-aa81efc124f8","Type":"ContainerStarted","Data":"fa42f0032afd2e67df2ea6e2a5239de5f7814508ee5df7f1434d5e786d97dfe7"} Nov 24 11:20:24 crc kubenswrapper[5072]: I1124 11:20:24.214279 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-9x2g2" Nov 24 11:20:24 crc kubenswrapper[5072]: I1124 11:20:24.216649 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-2ntqs" event={"ID":"186c5c36-95cc-427c-af18-4ba4d0c8ea58","Type":"ContainerStarted","Data":"ed867b170bd00eb1122d88d4a07a903853a311f1c320c8ac721222da7bfb66ba"} Nov 24 11:20:24 crc kubenswrapper[5072]: I1124 11:20:24.235778 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-hhvlc" podStartSLOduration=1.489073042 podStartE2EDuration="4.235727822s" podCreationTimestamp="2025-11-24 11:20:20 +0000 UTC" firstStartedPulling="2025-11-24 11:20:20.469026911 +0000 UTC m=+672.180551387" lastFinishedPulling="2025-11-24 11:20:23.215681691 +0000 UTC m=+674.927206167" observedRunningTime="2025-11-24 11:20:24.233205822 +0000 UTC m=+675.944730328" watchObservedRunningTime="2025-11-24 11:20:24.235727822 +0000 UTC m=+675.947252298" Nov 24 11:20:24 crc kubenswrapper[5072]: I1124 11:20:24.265685 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ppjv5" 
podStartSLOduration=1.776895026 podStartE2EDuration="4.265653366s" podCreationTimestamp="2025-11-24 11:20:20 +0000 UTC" firstStartedPulling="2025-11-24 11:20:20.71514502 +0000 UTC m=+672.426669496" lastFinishedPulling="2025-11-24 11:20:23.20390336 +0000 UTC m=+674.915427836" observedRunningTime="2025-11-24 11:20:24.25282787 +0000 UTC m=+675.964352356" watchObservedRunningTime="2025-11-24 11:20:24.265653366 +0000 UTC m=+675.977177912"
Nov 24 11:20:24 crc kubenswrapper[5072]: I1124 11:20:24.279951 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-9x2g2" podStartSLOduration=2.279795416 podStartE2EDuration="4.279930976s" podCreationTimestamp="2025-11-24 11:20:20 +0000 UTC" firstStartedPulling="2025-11-24 11:20:21.220735944 +0000 UTC m=+672.932260420" lastFinishedPulling="2025-11-24 11:20:23.220871504 +0000 UTC m=+674.932395980" observedRunningTime="2025-11-24 11:20:24.279420404 +0000 UTC m=+675.990944900" watchObservedRunningTime="2025-11-24 11:20:24.279930976 +0000 UTC m=+675.991455442"
Nov 24 11:20:26 crc kubenswrapper[5072]: I1124 11:20:26.231139 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-2ntqs" event={"ID":"186c5c36-95cc-427c-af18-4ba4d0c8ea58","Type":"ContainerStarted","Data":"fce0ce658fe2f0474188d9d32a633f91cae750fce2b4b41487953c34e91378a9"}
Nov 24 11:20:26 crc kubenswrapper[5072]: I1124 11:20:26.247119 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-2ntqs" podStartSLOduration=1.196014345 podStartE2EDuration="6.24709765s" podCreationTimestamp="2025-11-24 11:20:20 +0000 UTC" firstStartedPulling="2025-11-24 11:20:20.616315963 +0000 UTC m=+672.327840439" lastFinishedPulling="2025-11-24 11:20:25.667399268 +0000 UTC m=+677.378923744" observedRunningTime="2025-11-24 11:20:26.246754972 +0000 UTC m=+677.958279468" watchObservedRunningTime="2025-11-24 11:20:26.24709765 +0000 UTC m=+677.958622146"
Nov 24 11:20:30 crc kubenswrapper[5072]: I1124 11:20:30.469081 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-hhvlc"
Nov 24 11:20:30 crc kubenswrapper[5072]: I1124 11:20:30.736233 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-77b98456d9-np5m6"
Nov 24 11:20:30 crc kubenswrapper[5072]: I1124 11:20:30.736717 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-77b98456d9-np5m6"
Nov 24 11:20:30 crc kubenswrapper[5072]: I1124 11:20:30.745565 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-77b98456d9-np5m6"
Nov 24 11:20:31 crc kubenswrapper[5072]: I1124 11:20:31.285353 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-77b98456d9-np5m6"
Nov 24 11:20:31 crc kubenswrapper[5072]: I1124 11:20:31.362701 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-798pd"]
Nov 24 11:20:41 crc kubenswrapper[5072]: I1124 11:20:41.014840 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-9x2g2"
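The pod_startup_latency_tracker entries above also expose the arithmetic behind the two durations they report: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, while podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that, since pull time does not count against the startup SLO. A self-contained Go check of the nmstate-webhook numbers; the timestamp layout is an assumption matching Go's time.String() output, and the monotonic " m=+..." suffixes from the log are dropped before parsing:

package main

import (
	"fmt"
	"time"
)

// layout matches the "2025-11-24 11:20:20 +0000 UTC" form used in these log lines.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-11-24 11:20:20 +0000 UTC")              // podCreationTimestamp
	firstPull := mustParse("2025-11-24 11:20:21.220735944 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2025-11-24 11:20:23.220871504 +0000 UTC")  // lastFinishedPulling
	running := mustParse("2025-11-24 11:20:24.279930976 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: pull time excluded
	fmt.Println(e2e, slo)                // 4.279930976s 2.279795416s
}

Both printed values match the nmstate-webhook-6b89b748d8-9x2g2 entry logged above.

Nov 24 11:20:43 crc kubenswrapper[5072]: I1124 11:20:43.644600 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure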
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:20:43 crc kubenswrapper[5072]: I1124 11:20:43.644972 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.074463 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m"] Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.077630 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m" Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.080728 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.085751 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m"] Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.229210 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e5fd58fa-412d-4812-b49a-ad193626aed8-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m\" (UID: \"e5fd58fa-412d-4812-b49a-ad193626aed8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m" Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.229284 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7wfs\" (UniqueName: \"kubernetes.io/projected/e5fd58fa-412d-4812-b49a-ad193626aed8-kube-api-access-r7wfs\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m\" (UID: \"e5fd58fa-412d-4812-b49a-ad193626aed8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m" Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.229357 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e5fd58fa-412d-4812-b49a-ad193626aed8-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m\" (UID: \"e5fd58fa-412d-4812-b49a-ad193626aed8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m" Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.331678 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e5fd58fa-412d-4812-b49a-ad193626aed8-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m\" (UID: \"e5fd58fa-412d-4812-b49a-ad193626aed8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m" Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.331846 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e5fd58fa-412d-4812-b49a-ad193626aed8-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m\" 
(UID: \"e5fd58fa-412d-4812-b49a-ad193626aed8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m" Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.331934 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7wfs\" (UniqueName: \"kubernetes.io/projected/e5fd58fa-412d-4812-b49a-ad193626aed8-kube-api-access-r7wfs\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m\" (UID: \"e5fd58fa-412d-4812-b49a-ad193626aed8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m" Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.332632 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e5fd58fa-412d-4812-b49a-ad193626aed8-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m\" (UID: \"e5fd58fa-412d-4812-b49a-ad193626aed8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m" Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.332810 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e5fd58fa-412d-4812-b49a-ad193626aed8-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m\" (UID: \"e5fd58fa-412d-4812-b49a-ad193626aed8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m" Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.364842 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7wfs\" (UniqueName: \"kubernetes.io/projected/e5fd58fa-412d-4812-b49a-ad193626aed8-kube-api-access-r7wfs\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m\" (UID: \"e5fd58fa-412d-4812-b49a-ad193626aed8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m" Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.397863 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m" Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.455228 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-798pd" podUID="9d30ed7a-3577-40f4-8d32-eec9f851ab19" containerName="console" containerID="cri-o://86db00fa613322d83f7edb0d0995dcdb70016cd829e8f458d7f9b1b086d78b94" gracePeriod=15 Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.607540 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m"] Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.819925 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-798pd_9d30ed7a-3577-40f4-8d32-eec9f851ab19/console/0.log" Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.820238 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-798pd" Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.940318 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxnz7\" (UniqueName: \"kubernetes.io/projected/9d30ed7a-3577-40f4-8d32-eec9f851ab19-kube-api-access-sxnz7\") pod \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\" (UID: \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\") " Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.940448 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d30ed7a-3577-40f4-8d32-eec9f851ab19-trusted-ca-bundle\") pod \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\" (UID: \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\") " Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.940542 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9d30ed7a-3577-40f4-8d32-eec9f851ab19-oauth-serving-cert\") pod \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\" (UID: \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\") " Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.940578 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9d30ed7a-3577-40f4-8d32-eec9f851ab19-console-oauth-config\") pod \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\" (UID: \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\") " Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.940631 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9d30ed7a-3577-40f4-8d32-eec9f851ab19-service-ca\") pod \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\" (UID: \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\") " Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.940654 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9d30ed7a-3577-40f4-8d32-eec9f851ab19-console-config\") pod \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\" (UID: \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\") " Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.940742 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9d30ed7a-3577-40f4-8d32-eec9f851ab19-console-serving-cert\") pod \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\" (UID: \"9d30ed7a-3577-40f4-8d32-eec9f851ab19\") " Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.941551 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d30ed7a-3577-40f4-8d32-eec9f851ab19-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "9d30ed7a-3577-40f4-8d32-eec9f851ab19" (UID: "9d30ed7a-3577-40f4-8d32-eec9f851ab19"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.941801 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d30ed7a-3577-40f4-8d32-eec9f851ab19-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "9d30ed7a-3577-40f4-8d32-eec9f851ab19" (UID: "9d30ed7a-3577-40f4-8d32-eec9f851ab19"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.941844 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d30ed7a-3577-40f4-8d32-eec9f851ab19-service-ca" (OuterVolumeSpecName: "service-ca") pod "9d30ed7a-3577-40f4-8d32-eec9f851ab19" (UID: "9d30ed7a-3577-40f4-8d32-eec9f851ab19"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.941857 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d30ed7a-3577-40f4-8d32-eec9f851ab19-console-config" (OuterVolumeSpecName: "console-config") pod "9d30ed7a-3577-40f4-8d32-eec9f851ab19" (UID: "9d30ed7a-3577-40f4-8d32-eec9f851ab19"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.946298 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d30ed7a-3577-40f4-8d32-eec9f851ab19-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "9d30ed7a-3577-40f4-8d32-eec9f851ab19" (UID: "9d30ed7a-3577-40f4-8d32-eec9f851ab19"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.946421 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d30ed7a-3577-40f4-8d32-eec9f851ab19-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "9d30ed7a-3577-40f4-8d32-eec9f851ab19" (UID: "9d30ed7a-3577-40f4-8d32-eec9f851ab19"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:20:56 crc kubenswrapper[5072]: I1124 11:20:56.946425 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d30ed7a-3577-40f4-8d32-eec9f851ab19-kube-api-access-sxnz7" (OuterVolumeSpecName: "kube-api-access-sxnz7") pod "9d30ed7a-3577-40f4-8d32-eec9f851ab19" (UID: "9d30ed7a-3577-40f4-8d32-eec9f851ab19"). InnerVolumeSpecName "kube-api-access-sxnz7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:20:57 crc kubenswrapper[5072]: I1124 11:20:57.041810 5072 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9d30ed7a-3577-40f4-8d32-eec9f851ab19-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:57 crc kubenswrapper[5072]: I1124 11:20:57.041836 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxnz7\" (UniqueName: \"kubernetes.io/projected/9d30ed7a-3577-40f4-8d32-eec9f851ab19-kube-api-access-sxnz7\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:57 crc kubenswrapper[5072]: I1124 11:20:57.041846 5072 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d30ed7a-3577-40f4-8d32-eec9f851ab19-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:57 crc kubenswrapper[5072]: I1124 11:20:57.041856 5072 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9d30ed7a-3577-40f4-8d32-eec9f851ab19-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:57 crc kubenswrapper[5072]: I1124 11:20:57.041865 5072 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9d30ed7a-3577-40f4-8d32-eec9f851ab19-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:57 crc kubenswrapper[5072]: I1124 11:20:57.041875 5072 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9d30ed7a-3577-40f4-8d32-eec9f851ab19-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:57 crc kubenswrapper[5072]: I1124 11:20:57.041916 5072 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9d30ed7a-3577-40f4-8d32-eec9f851ab19-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:57 crc kubenswrapper[5072]: I1124 11:20:57.463156 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-798pd_9d30ed7a-3577-40f4-8d32-eec9f851ab19/console/0.log" Nov 24 11:20:57 crc kubenswrapper[5072]: I1124 11:20:57.463218 5072 generic.go:334] "Generic (PLEG): container finished" podID="9d30ed7a-3577-40f4-8d32-eec9f851ab19" containerID="86db00fa613322d83f7edb0d0995dcdb70016cd829e8f458d7f9b1b086d78b94" exitCode=2 Nov 24 11:20:57 crc kubenswrapper[5072]: I1124 11:20:57.463318 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-798pd" Nov 24 11:20:57 crc kubenswrapper[5072]: I1124 11:20:57.463547 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-798pd" event={"ID":"9d30ed7a-3577-40f4-8d32-eec9f851ab19","Type":"ContainerDied","Data":"86db00fa613322d83f7edb0d0995dcdb70016cd829e8f458d7f9b1b086d78b94"} Nov 24 11:20:57 crc kubenswrapper[5072]: I1124 11:20:57.463592 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-798pd" event={"ID":"9d30ed7a-3577-40f4-8d32-eec9f851ab19","Type":"ContainerDied","Data":"16b8bb70a3c0c6a3aa3cde9816118e6c8174c822fe59fe7d3a2903f6c558076d"} Nov 24 11:20:57 crc kubenswrapper[5072]: I1124 11:20:57.463809 5072 scope.go:117] "RemoveContainer" containerID="86db00fa613322d83f7edb0d0995dcdb70016cd829e8f458d7f9b1b086d78b94" Nov 24 11:20:57 crc kubenswrapper[5072]: I1124 11:20:57.468112 5072 generic.go:334] "Generic (PLEG): container finished" podID="e5fd58fa-412d-4812-b49a-ad193626aed8" containerID="7c0bb500a7c0af6254e01838a8be830548727cfe50b4d549811801d2d3d0df4e" exitCode=0 Nov 24 11:20:57 crc kubenswrapper[5072]: I1124 11:20:57.468164 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m" event={"ID":"e5fd58fa-412d-4812-b49a-ad193626aed8","Type":"ContainerDied","Data":"7c0bb500a7c0af6254e01838a8be830548727cfe50b4d549811801d2d3d0df4e"} Nov 24 11:20:57 crc kubenswrapper[5072]: I1124 11:20:57.468194 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m" event={"ID":"e5fd58fa-412d-4812-b49a-ad193626aed8","Type":"ContainerStarted","Data":"3cc9141cf3fe7e7dd483ce6a9ea82bf0131e557360f13890f628017ed0917ca8"} Nov 24 11:20:57 crc kubenswrapper[5072]: I1124 11:20:57.490605 5072 scope.go:117] "RemoveContainer" containerID="86db00fa613322d83f7edb0d0995dcdb70016cd829e8f458d7f9b1b086d78b94" Nov 24 11:20:57 crc kubenswrapper[5072]: E1124 11:20:57.492413 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86db00fa613322d83f7edb0d0995dcdb70016cd829e8f458d7f9b1b086d78b94\": container with ID starting with 86db00fa613322d83f7edb0d0995dcdb70016cd829e8f458d7f9b1b086d78b94 not found: ID does not exist" containerID="86db00fa613322d83f7edb0d0995dcdb70016cd829e8f458d7f9b1b086d78b94" Nov 24 11:20:57 crc kubenswrapper[5072]: I1124 11:20:57.492495 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86db00fa613322d83f7edb0d0995dcdb70016cd829e8f458d7f9b1b086d78b94"} err="failed to get container status \"86db00fa613322d83f7edb0d0995dcdb70016cd829e8f458d7f9b1b086d78b94\": rpc error: code = NotFound desc = could not find container \"86db00fa613322d83f7edb0d0995dcdb70016cd829e8f458d7f9b1b086d78b94\": container with ID starting with 86db00fa613322d83f7edb0d0995dcdb70016cd829e8f458d7f9b1b086d78b94 not found: ID does not exist" Nov 24 11:20:57 crc kubenswrapper[5072]: I1124 11:20:57.505626 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-798pd"] Nov 24 11:20:57 crc kubenswrapper[5072]: I1124 11:20:57.511362 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-798pd"] Nov 24 11:20:59 crc kubenswrapper[5072]: I1124 11:20:59.026207 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="9d30ed7a-3577-40f4-8d32-eec9f851ab19" path="/var/lib/kubelet/pods/9d30ed7a-3577-40f4-8d32-eec9f851ab19/volumes" Nov 24 11:20:59 crc kubenswrapper[5072]: I1124 11:20:59.494643 5072 generic.go:334] "Generic (PLEG): container finished" podID="e5fd58fa-412d-4812-b49a-ad193626aed8" containerID="4dd705acb9ce6a8724288fc92aae38d07178a2ba246c3550b771baaf7231172f" exitCode=0 Nov 24 11:20:59 crc kubenswrapper[5072]: I1124 11:20:59.494701 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m" event={"ID":"e5fd58fa-412d-4812-b49a-ad193626aed8","Type":"ContainerDied","Data":"4dd705acb9ce6a8724288fc92aae38d07178a2ba246c3550b771baaf7231172f"} Nov 24 11:21:00 crc kubenswrapper[5072]: I1124 11:21:00.504036 5072 generic.go:334] "Generic (PLEG): container finished" podID="e5fd58fa-412d-4812-b49a-ad193626aed8" containerID="cd56c432ca85e519589153ca1e7cf1fda657d2b8e54515ae51db9675ebbe45e8" exitCode=0 Nov 24 11:21:00 crc kubenswrapper[5072]: I1124 11:21:00.504088 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m" event={"ID":"e5fd58fa-412d-4812-b49a-ad193626aed8","Type":"ContainerDied","Data":"cd56c432ca85e519589153ca1e7cf1fda657d2b8e54515ae51db9675ebbe45e8"} Nov 24 11:21:01 crc kubenswrapper[5072]: I1124 11:21:01.798784 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m" Nov 24 11:21:01 crc kubenswrapper[5072]: I1124 11:21:01.907133 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e5fd58fa-412d-4812-b49a-ad193626aed8-util\") pod \"e5fd58fa-412d-4812-b49a-ad193626aed8\" (UID: \"e5fd58fa-412d-4812-b49a-ad193626aed8\") " Nov 24 11:21:01 crc kubenswrapper[5072]: I1124 11:21:01.907217 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7wfs\" (UniqueName: \"kubernetes.io/projected/e5fd58fa-412d-4812-b49a-ad193626aed8-kube-api-access-r7wfs\") pod \"e5fd58fa-412d-4812-b49a-ad193626aed8\" (UID: \"e5fd58fa-412d-4812-b49a-ad193626aed8\") " Nov 24 11:21:01 crc kubenswrapper[5072]: I1124 11:21:01.907287 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e5fd58fa-412d-4812-b49a-ad193626aed8-bundle\") pod \"e5fd58fa-412d-4812-b49a-ad193626aed8\" (UID: \"e5fd58fa-412d-4812-b49a-ad193626aed8\") " Nov 24 11:21:01 crc kubenswrapper[5072]: I1124 11:21:01.908298 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5fd58fa-412d-4812-b49a-ad193626aed8-bundle" (OuterVolumeSpecName: "bundle") pod "e5fd58fa-412d-4812-b49a-ad193626aed8" (UID: "e5fd58fa-412d-4812-b49a-ad193626aed8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:21:01 crc kubenswrapper[5072]: I1124 11:21:01.911999 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5fd58fa-412d-4812-b49a-ad193626aed8-kube-api-access-r7wfs" (OuterVolumeSpecName: "kube-api-access-r7wfs") pod "e5fd58fa-412d-4812-b49a-ad193626aed8" (UID: "e5fd58fa-412d-4812-b49a-ad193626aed8"). InnerVolumeSpecName "kube-api-access-r7wfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:21:01 crc kubenswrapper[5072]: I1124 11:21:01.921141 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5fd58fa-412d-4812-b49a-ad193626aed8-util" (OuterVolumeSpecName: "util") pod "e5fd58fa-412d-4812-b49a-ad193626aed8" (UID: "e5fd58fa-412d-4812-b49a-ad193626aed8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:21:02 crc kubenswrapper[5072]: I1124 11:21:02.009823 5072 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e5fd58fa-412d-4812-b49a-ad193626aed8-util\") on node \"crc\" DevicePath \"\"" Nov 24 11:21:02 crc kubenswrapper[5072]: I1124 11:21:02.009907 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7wfs\" (UniqueName: \"kubernetes.io/projected/e5fd58fa-412d-4812-b49a-ad193626aed8-kube-api-access-r7wfs\") on node \"crc\" DevicePath \"\"" Nov 24 11:21:02 crc kubenswrapper[5072]: I1124 11:21:02.009927 5072 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e5fd58fa-412d-4812-b49a-ad193626aed8-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:21:02 crc kubenswrapper[5072]: I1124 11:21:02.525561 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m" event={"ID":"e5fd58fa-412d-4812-b49a-ad193626aed8","Type":"ContainerDied","Data":"3cc9141cf3fe7e7dd483ce6a9ea82bf0131e557360f13890f628017ed0917ca8"} Nov 24 11:21:02 crc kubenswrapper[5072]: I1124 11:21:02.525609 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cc9141cf3fe7e7dd483ce6a9ea82bf0131e557360f13890f628017ed0917ca8" Nov 24 11:21:02 crc kubenswrapper[5072]: I1124 11:21:02.525663 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m" Nov 24 11:21:13 crc kubenswrapper[5072]: I1124 11:21:13.645445 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:21:13 crc kubenswrapper[5072]: I1124 11:21:13.646936 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.260765 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-b6dc8dd56-6d5x5"] Nov 24 11:21:14 crc kubenswrapper[5072]: E1124 11:21:14.260973 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d30ed7a-3577-40f4-8d32-eec9f851ab19" containerName="console" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.260986 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d30ed7a-3577-40f4-8d32-eec9f851ab19" containerName="console" Nov 24 11:21:14 crc kubenswrapper[5072]: E1124 11:21:14.261004 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5fd58fa-412d-4812-b49a-ad193626aed8" containerName="util" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.261012 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5fd58fa-412d-4812-b49a-ad193626aed8" containerName="util" Nov 24 11:21:14 crc kubenswrapper[5072]: E1124 11:21:14.261024 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5fd58fa-412d-4812-b49a-ad193626aed8" containerName="extract" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.261031 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5fd58fa-412d-4812-b49a-ad193626aed8" containerName="extract" Nov 24 11:21:14 crc kubenswrapper[5072]: E1124 11:21:14.261041 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5fd58fa-412d-4812-b49a-ad193626aed8" containerName="pull" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.261050 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5fd58fa-412d-4812-b49a-ad193626aed8" containerName="pull" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.261162 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5fd58fa-412d-4812-b49a-ad193626aed8" containerName="extract" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.261176 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d30ed7a-3577-40f4-8d32-eec9f851ab19" containerName="console" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.261651 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-b6dc8dd56-6d5x5" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.265728 5072 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.266343 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.266671 5072 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-4g92d" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.267512 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.268112 5072 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.278931 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-b6dc8dd56-6d5x5"] Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.289280 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbnwg\" (UniqueName: \"kubernetes.io/projected/30512acc-64dc-4a20-88e5-565a69d8f95c-kube-api-access-jbnwg\") pod \"metallb-operator-controller-manager-b6dc8dd56-6d5x5\" (UID: \"30512acc-64dc-4a20-88e5-565a69d8f95c\") " pod="metallb-system/metallb-operator-controller-manager-b6dc8dd56-6d5x5" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.289365 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30512acc-64dc-4a20-88e5-565a69d8f95c-webhook-cert\") pod \"metallb-operator-controller-manager-b6dc8dd56-6d5x5\" (UID: \"30512acc-64dc-4a20-88e5-565a69d8f95c\") " pod="metallb-system/metallb-operator-controller-manager-b6dc8dd56-6d5x5" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.289406 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/30512acc-64dc-4a20-88e5-565a69d8f95c-apiservice-cert\") pod \"metallb-operator-controller-manager-b6dc8dd56-6d5x5\" (UID: \"30512acc-64dc-4a20-88e5-565a69d8f95c\") " pod="metallb-system/metallb-operator-controller-manager-b6dc8dd56-6d5x5" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.390572 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30512acc-64dc-4a20-88e5-565a69d8f95c-webhook-cert\") pod \"metallb-operator-controller-manager-b6dc8dd56-6d5x5\" (UID: \"30512acc-64dc-4a20-88e5-565a69d8f95c\") " pod="metallb-system/metallb-operator-controller-manager-b6dc8dd56-6d5x5" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.390621 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/30512acc-64dc-4a20-88e5-565a69d8f95c-apiservice-cert\") pod \"metallb-operator-controller-manager-b6dc8dd56-6d5x5\" (UID: \"30512acc-64dc-4a20-88e5-565a69d8f95c\") " pod="metallb-system/metallb-operator-controller-manager-b6dc8dd56-6d5x5" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.390665 5072 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jbnwg\" (UniqueName: \"kubernetes.io/projected/30512acc-64dc-4a20-88e5-565a69d8f95c-kube-api-access-jbnwg\") pod \"metallb-operator-controller-manager-b6dc8dd56-6d5x5\" (UID: \"30512acc-64dc-4a20-88e5-565a69d8f95c\") " pod="metallb-system/metallb-operator-controller-manager-b6dc8dd56-6d5x5" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.396153 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/30512acc-64dc-4a20-88e5-565a69d8f95c-apiservice-cert\") pod \"metallb-operator-controller-manager-b6dc8dd56-6d5x5\" (UID: \"30512acc-64dc-4a20-88e5-565a69d8f95c\") " pod="metallb-system/metallb-operator-controller-manager-b6dc8dd56-6d5x5" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.409270 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30512acc-64dc-4a20-88e5-565a69d8f95c-webhook-cert\") pod \"metallb-operator-controller-manager-b6dc8dd56-6d5x5\" (UID: \"30512acc-64dc-4a20-88e5-565a69d8f95c\") " pod="metallb-system/metallb-operator-controller-manager-b6dc8dd56-6d5x5" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.409787 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbnwg\" (UniqueName: \"kubernetes.io/projected/30512acc-64dc-4a20-88e5-565a69d8f95c-kube-api-access-jbnwg\") pod \"metallb-operator-controller-manager-b6dc8dd56-6d5x5\" (UID: \"30512acc-64dc-4a20-88e5-565a69d8f95c\") " pod="metallb-system/metallb-operator-controller-manager-b6dc8dd56-6d5x5" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.580688 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-b6dc8dd56-6d5x5" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.679945 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-75d856c88d-rz946"] Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.681853 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-75d856c88d-rz946" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.683863 5072 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-dw9gq" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.685233 5072 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.690190 5072 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.694089 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-75d856c88d-rz946"] Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.794383 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e3c19ac2-dba1-4b49-acb0-1f93285f60b2-apiservice-cert\") pod \"metallb-operator-webhook-server-75d856c88d-rz946\" (UID: \"e3c19ac2-dba1-4b49-acb0-1f93285f60b2\") " pod="metallb-system/metallb-operator-webhook-server-75d856c88d-rz946" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.794419 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e3c19ac2-dba1-4b49-acb0-1f93285f60b2-webhook-cert\") pod \"metallb-operator-webhook-server-75d856c88d-rz946\" (UID: \"e3c19ac2-dba1-4b49-acb0-1f93285f60b2\") " pod="metallb-system/metallb-operator-webhook-server-75d856c88d-rz946" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.794455 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hflgb\" (UniqueName: \"kubernetes.io/projected/e3c19ac2-dba1-4b49-acb0-1f93285f60b2-kube-api-access-hflgb\") pod \"metallb-operator-webhook-server-75d856c88d-rz946\" (UID: \"e3c19ac2-dba1-4b49-acb0-1f93285f60b2\") " pod="metallb-system/metallb-operator-webhook-server-75d856c88d-rz946" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.860722 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-b6dc8dd56-6d5x5"] Nov 24 11:21:14 crc kubenswrapper[5072]: W1124 11:21:14.865203 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30512acc_64dc_4a20_88e5_565a69d8f95c.slice/crio-203d0d1692128c297d15e3792cb361660401658cfd0d0736cad7ccea8a2e2d48 WatchSource:0}: Error finding container 203d0d1692128c297d15e3792cb361660401658cfd0d0736cad7ccea8a2e2d48: Status 404 returned error can't find the container with id 203d0d1692128c297d15e3792cb361660401658cfd0d0736cad7ccea8a2e2d48 Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.895417 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e3c19ac2-dba1-4b49-acb0-1f93285f60b2-apiservice-cert\") pod \"metallb-operator-webhook-server-75d856c88d-rz946\" (UID: \"e3c19ac2-dba1-4b49-acb0-1f93285f60b2\") " pod="metallb-system/metallb-operator-webhook-server-75d856c88d-rz946" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.895475 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/e3c19ac2-dba1-4b49-acb0-1f93285f60b2-webhook-cert\") pod \"metallb-operator-webhook-server-75d856c88d-rz946\" (UID: \"e3c19ac2-dba1-4b49-acb0-1f93285f60b2\") " pod="metallb-system/metallb-operator-webhook-server-75d856c88d-rz946" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.895553 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hflgb\" (UniqueName: \"kubernetes.io/projected/e3c19ac2-dba1-4b49-acb0-1f93285f60b2-kube-api-access-hflgb\") pod \"metallb-operator-webhook-server-75d856c88d-rz946\" (UID: \"e3c19ac2-dba1-4b49-acb0-1f93285f60b2\") " pod="metallb-system/metallb-operator-webhook-server-75d856c88d-rz946" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.900542 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e3c19ac2-dba1-4b49-acb0-1f93285f60b2-apiservice-cert\") pod \"metallb-operator-webhook-server-75d856c88d-rz946\" (UID: \"e3c19ac2-dba1-4b49-acb0-1f93285f60b2\") " pod="metallb-system/metallb-operator-webhook-server-75d856c88d-rz946" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.901505 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e3c19ac2-dba1-4b49-acb0-1f93285f60b2-webhook-cert\") pod \"metallb-operator-webhook-server-75d856c88d-rz946\" (UID: \"e3c19ac2-dba1-4b49-acb0-1f93285f60b2\") " pod="metallb-system/metallb-operator-webhook-server-75d856c88d-rz946" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.912216 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hflgb\" (UniqueName: \"kubernetes.io/projected/e3c19ac2-dba1-4b49-acb0-1f93285f60b2-kube-api-access-hflgb\") pod \"metallb-operator-webhook-server-75d856c88d-rz946\" (UID: \"e3c19ac2-dba1-4b49-acb0-1f93285f60b2\") " pod="metallb-system/metallb-operator-webhook-server-75d856c88d-rz946" Nov 24 11:21:14 crc kubenswrapper[5072]: I1124 11:21:14.994862 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-75d856c88d-rz946" Nov 24 11:21:15 crc kubenswrapper[5072]: I1124 11:21:15.274483 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-75d856c88d-rz946"] Nov 24 11:21:15 crc kubenswrapper[5072]: W1124 11:21:15.285059 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3c19ac2_dba1_4b49_acb0_1f93285f60b2.slice/crio-50734c4bb4ee3cec376a1160227010273de267ea87c2fd63841221a06c0e9c9e WatchSource:0}: Error finding container 50734c4bb4ee3cec376a1160227010273de267ea87c2fd63841221a06c0e9c9e: Status 404 returned error can't find the container with id 50734c4bb4ee3cec376a1160227010273de267ea87c2fd63841221a06c0e9c9e Nov 24 11:21:15 crc kubenswrapper[5072]: I1124 11:21:15.621474 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-b6dc8dd56-6d5x5" event={"ID":"30512acc-64dc-4a20-88e5-565a69d8f95c","Type":"ContainerStarted","Data":"203d0d1692128c297d15e3792cb361660401658cfd0d0736cad7ccea8a2e2d48"} Nov 24 11:21:15 crc kubenswrapper[5072]: I1124 11:21:15.622889 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-75d856c88d-rz946" event={"ID":"e3c19ac2-dba1-4b49-acb0-1f93285f60b2","Type":"ContainerStarted","Data":"50734c4bb4ee3cec376a1160227010273de267ea87c2fd63841221a06c0e9c9e"} Nov 24 11:21:20 crc kubenswrapper[5072]: I1124 11:21:20.677308 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-75d856c88d-rz946" event={"ID":"e3c19ac2-dba1-4b49-acb0-1f93285f60b2","Type":"ContainerStarted","Data":"59f70eece16ce99e59aee1a2a956d118870e0d9571bf1d32cef466fc1c1c5f83"} Nov 24 11:21:20 crc kubenswrapper[5072]: I1124 11:21:20.677684 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-75d856c88d-rz946" Nov 24 11:21:20 crc kubenswrapper[5072]: I1124 11:21:20.678673 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-b6dc8dd56-6d5x5" event={"ID":"30512acc-64dc-4a20-88e5-565a69d8f95c","Type":"ContainerStarted","Data":"92b1435d6eead070db625b05c40eab0823edcd871b670e0569e28153c93c1003"} Nov 24 11:21:20 crc kubenswrapper[5072]: I1124 11:21:20.678844 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-b6dc8dd56-6d5x5" Nov 24 11:21:20 crc kubenswrapper[5072]: I1124 11:21:20.696245 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-75d856c88d-rz946" podStartSLOduration=1.745914032 podStartE2EDuration="6.696222967s" podCreationTimestamp="2025-11-24 11:21:14 +0000 UTC" firstStartedPulling="2025-11-24 11:21:15.287579032 +0000 UTC m=+726.999103508" lastFinishedPulling="2025-11-24 11:21:20.237887967 +0000 UTC m=+731.949412443" observedRunningTime="2025-11-24 11:21:20.692232156 +0000 UTC m=+732.403756632" watchObservedRunningTime="2025-11-24 11:21:20.696222967 +0000 UTC m=+732.407747453" Nov 24 11:21:20 crc kubenswrapper[5072]: I1124 11:21:20.721445 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-b6dc8dd56-6d5x5" podStartSLOduration=1.370193464 podStartE2EDuration="6.721422362s" podCreationTimestamp="2025-11-24 
11:21:14 +0000 UTC" firstStartedPulling="2025-11-24 11:21:14.868105221 +0000 UTC m=+726.579629707" lastFinishedPulling="2025-11-24 11:21:20.219334129 +0000 UTC m=+731.930858605" observedRunningTime="2025-11-24 11:21:20.719865353 +0000 UTC m=+732.431389839" watchObservedRunningTime="2025-11-24 11:21:20.721422362 +0000 UTC m=+732.432946848" Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.234702 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-km2xf"] Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.235377 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-km2xf" podUID="421f29d9-28d7-4e85-852e-d25b0529497a" containerName="controller-manager" containerID="cri-o://3094c361101979baf09885afdf03b95d3f681054275d5a2c5f220c9cdcbd3d20" gracePeriod=30 Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.356954 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf"] Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.357190 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf" podUID="ca699c4e-ccec-4ff8-895f-109777beca4c" containerName="route-controller-manager" containerID="cri-o://8fa7a95d108472a5a96017a67b76f1e4c64d97ae1be0d1e7b64586b60918620c" gracePeriod=30 Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.684225 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-km2xf" Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.722472 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/421f29d9-28d7-4e85-852e-d25b0529497a-serving-cert\") pod \"421f29d9-28d7-4e85-852e-d25b0529497a\" (UID: \"421f29d9-28d7-4e85-852e-d25b0529497a\") " Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.722528 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/421f29d9-28d7-4e85-852e-d25b0529497a-proxy-ca-bundles\") pod \"421f29d9-28d7-4e85-852e-d25b0529497a\" (UID: \"421f29d9-28d7-4e85-852e-d25b0529497a\") " Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.722550 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/421f29d9-28d7-4e85-852e-d25b0529497a-client-ca\") pod \"421f29d9-28d7-4e85-852e-d25b0529497a\" (UID: \"421f29d9-28d7-4e85-852e-d25b0529497a\") " Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.722612 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkwnb\" (UniqueName: \"kubernetes.io/projected/421f29d9-28d7-4e85-852e-d25b0529497a-kube-api-access-hkwnb\") pod \"421f29d9-28d7-4e85-852e-d25b0529497a\" (UID: \"421f29d9-28d7-4e85-852e-d25b0529497a\") " Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.722632 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/421f29d9-28d7-4e85-852e-d25b0529497a-config\") pod \"421f29d9-28d7-4e85-852e-d25b0529497a\" (UID: \"421f29d9-28d7-4e85-852e-d25b0529497a\") " Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 
11:21:31.723649 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/421f29d9-28d7-4e85-852e-d25b0529497a-config" (OuterVolumeSpecName: "config") pod "421f29d9-28d7-4e85-852e-d25b0529497a" (UID: "421f29d9-28d7-4e85-852e-d25b0529497a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.724885 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/421f29d9-28d7-4e85-852e-d25b0529497a-client-ca" (OuterVolumeSpecName: "client-ca") pod "421f29d9-28d7-4e85-852e-d25b0529497a" (UID: "421f29d9-28d7-4e85-852e-d25b0529497a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.725234 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/421f29d9-28d7-4e85-852e-d25b0529497a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "421f29d9-28d7-4e85-852e-d25b0529497a" (UID: "421f29d9-28d7-4e85-852e-d25b0529497a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.740046 5072 generic.go:334] "Generic (PLEG): container finished" podID="ca699c4e-ccec-4ff8-895f-109777beca4c" containerID="8fa7a95d108472a5a96017a67b76f1e4c64d97ae1be0d1e7b64586b60918620c" exitCode=0 Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.740103 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf" event={"ID":"ca699c4e-ccec-4ff8-895f-109777beca4c","Type":"ContainerDied","Data":"8fa7a95d108472a5a96017a67b76f1e4c64d97ae1be0d1e7b64586b60918620c"} Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.740647 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/421f29d9-28d7-4e85-852e-d25b0529497a-kube-api-access-hkwnb" (OuterVolumeSpecName: "kube-api-access-hkwnb") pod "421f29d9-28d7-4e85-852e-d25b0529497a" (UID: "421f29d9-28d7-4e85-852e-d25b0529497a"). InnerVolumeSpecName "kube-api-access-hkwnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.740814 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/421f29d9-28d7-4e85-852e-d25b0529497a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "421f29d9-28d7-4e85-852e-d25b0529497a" (UID: "421f29d9-28d7-4e85-852e-d25b0529497a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.741642 5072 generic.go:334] "Generic (PLEG): container finished" podID="421f29d9-28d7-4e85-852e-d25b0529497a" containerID="3094c361101979baf09885afdf03b95d3f681054275d5a2c5f220c9cdcbd3d20" exitCode=0 Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.741666 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-km2xf" event={"ID":"421f29d9-28d7-4e85-852e-d25b0529497a","Type":"ContainerDied","Data":"3094c361101979baf09885afdf03b95d3f681054275d5a2c5f220c9cdcbd3d20"} Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.741680 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-km2xf" event={"ID":"421f29d9-28d7-4e85-852e-d25b0529497a","Type":"ContainerDied","Data":"fab1a48635d92f98293e5b0b0a4ff1824b6abef1558da5ca3563e04b8677bbc8"} Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.741696 5072 scope.go:117] "RemoveContainer" containerID="3094c361101979baf09885afdf03b95d3f681054275d5a2c5f220c9cdcbd3d20" Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.741794 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-km2xf" Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.772682 5072 scope.go:117] "RemoveContainer" containerID="3094c361101979baf09885afdf03b95d3f681054275d5a2c5f220c9cdcbd3d20" Nov 24 11:21:31 crc kubenswrapper[5072]: E1124 11:21:31.773189 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3094c361101979baf09885afdf03b95d3f681054275d5a2c5f220c9cdcbd3d20\": container with ID starting with 3094c361101979baf09885afdf03b95d3f681054275d5a2c5f220c9cdcbd3d20 not found: ID does not exist" containerID="3094c361101979baf09885afdf03b95d3f681054275d5a2c5f220c9cdcbd3d20" Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.773221 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3094c361101979baf09885afdf03b95d3f681054275d5a2c5f220c9cdcbd3d20"} err="failed to get container status \"3094c361101979baf09885afdf03b95d3f681054275d5a2c5f220c9cdcbd3d20\": rpc error: code = NotFound desc = could not find container \"3094c361101979baf09885afdf03b95d3f681054275d5a2c5f220c9cdcbd3d20\": container with ID starting with 3094c361101979baf09885afdf03b95d3f681054275d5a2c5f220c9cdcbd3d20 not found: ID does not exist" Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.785342 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf" Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.789157 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-km2xf"] Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.792287 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-km2xf"] Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.823410 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca699c4e-ccec-4ff8-895f-109777beca4c-client-ca\") pod \"ca699c4e-ccec-4ff8-895f-109777beca4c\" (UID: \"ca699c4e-ccec-4ff8-895f-109777beca4c\") " Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.823475 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hwng\" (UniqueName: \"kubernetes.io/projected/ca699c4e-ccec-4ff8-895f-109777beca4c-kube-api-access-9hwng\") pod \"ca699c4e-ccec-4ff8-895f-109777beca4c\" (UID: \"ca699c4e-ccec-4ff8-895f-109777beca4c\") " Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.823554 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca699c4e-ccec-4ff8-895f-109777beca4c-config\") pod \"ca699c4e-ccec-4ff8-895f-109777beca4c\" (UID: \"ca699c4e-ccec-4ff8-895f-109777beca4c\") " Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.823623 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca699c4e-ccec-4ff8-895f-109777beca4c-serving-cert\") pod \"ca699c4e-ccec-4ff8-895f-109777beca4c\" (UID: \"ca699c4e-ccec-4ff8-895f-109777beca4c\") " Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.823887 5072 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/421f29d9-28d7-4e85-852e-d25b0529497a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.823906 5072 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/421f29d9-28d7-4e85-852e-d25b0529497a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.823918 5072 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/421f29d9-28d7-4e85-852e-d25b0529497a-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.823929 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkwnb\" (UniqueName: \"kubernetes.io/projected/421f29d9-28d7-4e85-852e-d25b0529497a-kube-api-access-hkwnb\") on node \"crc\" DevicePath \"\"" Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.823939 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/421f29d9-28d7-4e85-852e-d25b0529497a-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.825623 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca699c4e-ccec-4ff8-895f-109777beca4c-config" (OuterVolumeSpecName: "config") pod "ca699c4e-ccec-4ff8-895f-109777beca4c" (UID: "ca699c4e-ccec-4ff8-895f-109777beca4c"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.825868 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca699c4e-ccec-4ff8-895f-109777beca4c-client-ca" (OuterVolumeSpecName: "client-ca") pod "ca699c4e-ccec-4ff8-895f-109777beca4c" (UID: "ca699c4e-ccec-4ff8-895f-109777beca4c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.827503 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca699c4e-ccec-4ff8-895f-109777beca4c-kube-api-access-9hwng" (OuterVolumeSpecName: "kube-api-access-9hwng") pod "ca699c4e-ccec-4ff8-895f-109777beca4c" (UID: "ca699c4e-ccec-4ff8-895f-109777beca4c"). InnerVolumeSpecName "kube-api-access-9hwng". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.828327 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca699c4e-ccec-4ff8-895f-109777beca4c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ca699c4e-ccec-4ff8-895f-109777beca4c" (UID: "ca699c4e-ccec-4ff8-895f-109777beca4c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.925319 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca699c4e-ccec-4ff8-895f-109777beca4c-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.925353 5072 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca699c4e-ccec-4ff8-895f-109777beca4c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.925362 5072 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca699c4e-ccec-4ff8-895f-109777beca4c-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:21:31 crc kubenswrapper[5072]: I1124 11:21:31.925385 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hwng\" (UniqueName: \"kubernetes.io/projected/ca699c4e-ccec-4ff8-895f-109777beca4c-kube-api-access-9hwng\") on node \"crc\" DevicePath \"\"" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.750695 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.750710 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf" event={"ID":"ca699c4e-ccec-4ff8-895f-109777beca4c","Type":"ContainerDied","Data":"3ffa303f86dad3facd8517c3c2829894323177b2d82268d2bff3ba2f41b202e7"} Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.751182 5072 scope.go:117] "RemoveContainer" containerID="8fa7a95d108472a5a96017a67b76f1e4c64d97ae1be0d1e7b64586b60918620c" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.780960 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf"] Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.783936 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mzvpf"] Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.813532 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7765d77f48-khj52"] Nov 24 11:21:32 crc kubenswrapper[5072]: E1124 11:21:32.813784 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca699c4e-ccec-4ff8-895f-109777beca4c" containerName="route-controller-manager" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.813803 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca699c4e-ccec-4ff8-895f-109777beca4c" containerName="route-controller-manager" Nov 24 11:21:32 crc kubenswrapper[5072]: E1124 11:21:32.813815 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="421f29d9-28d7-4e85-852e-d25b0529497a" containerName="controller-manager" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.813823 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="421f29d9-28d7-4e85-852e-d25b0529497a" containerName="controller-manager" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.813949 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca699c4e-ccec-4ff8-895f-109777beca4c" containerName="route-controller-manager" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.813969 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="421f29d9-28d7-4e85-852e-d25b0529497a" containerName="controller-manager" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.814456 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7765d77f48-khj52" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.817246 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.817625 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.817868 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.818075 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.818246 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.818526 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.826265 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.826941 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7765d77f48-khj52"] Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.831830 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79966c9c4b-wrjh9"] Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.832659 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79966c9c4b-wrjh9" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.834935 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.835141 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.835240 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.835236 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.835523 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.837894 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a81fb4f9-df13-4981-8f79-84c2c8a1bf98-proxy-ca-bundles\") pod \"controller-manager-7765d77f48-khj52\" (UID: \"a81fb4f9-df13-4981-8f79-84c2c8a1bf98\") " pod="openshift-controller-manager/controller-manager-7765d77f48-khj52" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.838020 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a81fb4f9-df13-4981-8f79-84c2c8a1bf98-serving-cert\") pod \"controller-manager-7765d77f48-khj52\" (UID: \"a81fb4f9-df13-4981-8f79-84c2c8a1bf98\") " pod="openshift-controller-manager/controller-manager-7765d77f48-khj52" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.838122 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvwvn\" (UniqueName: \"kubernetes.io/projected/a81fb4f9-df13-4981-8f79-84c2c8a1bf98-kube-api-access-dvwvn\") pod \"controller-manager-7765d77f48-khj52\" (UID: \"a81fb4f9-df13-4981-8f79-84c2c8a1bf98\") " pod="openshift-controller-manager/controller-manager-7765d77f48-khj52" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.838251 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a81fb4f9-df13-4981-8f79-84c2c8a1bf98-client-ca\") pod \"controller-manager-7765d77f48-khj52\" (UID: \"a81fb4f9-df13-4981-8f79-84c2c8a1bf98\") " pod="openshift-controller-manager/controller-manager-7765d77f48-khj52" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.838334 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a81fb4f9-df13-4981-8f79-84c2c8a1bf98-config\") pod \"controller-manager-7765d77f48-khj52\" (UID: \"a81fb4f9-df13-4981-8f79-84c2c8a1bf98\") " pod="openshift-controller-manager/controller-manager-7765d77f48-khj52" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.840334 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.848514 5072 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-route-controller-manager/route-controller-manager-79966c9c4b-wrjh9"] Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.939257 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a81fb4f9-df13-4981-8f79-84c2c8a1bf98-proxy-ca-bundles\") pod \"controller-manager-7765d77f48-khj52\" (UID: \"a81fb4f9-df13-4981-8f79-84c2c8a1bf98\") " pod="openshift-controller-manager/controller-manager-7765d77f48-khj52" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.939322 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cbd62824-a9c4-4180-9700-b09bf5c7e1df-client-ca\") pod \"route-controller-manager-79966c9c4b-wrjh9\" (UID: \"cbd62824-a9c4-4180-9700-b09bf5c7e1df\") " pod="openshift-route-controller-manager/route-controller-manager-79966c9c4b-wrjh9" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.939353 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a81fb4f9-df13-4981-8f79-84c2c8a1bf98-serving-cert\") pod \"controller-manager-7765d77f48-khj52\" (UID: \"a81fb4f9-df13-4981-8f79-84c2c8a1bf98\") " pod="openshift-controller-manager/controller-manager-7765d77f48-khj52" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.939434 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbd62824-a9c4-4180-9700-b09bf5c7e1df-serving-cert\") pod \"route-controller-manager-79966c9c4b-wrjh9\" (UID: \"cbd62824-a9c4-4180-9700-b09bf5c7e1df\") " pod="openshift-route-controller-manager/route-controller-manager-79966c9c4b-wrjh9" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.939461 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvwvn\" (UniqueName: \"kubernetes.io/projected/a81fb4f9-df13-4981-8f79-84c2c8a1bf98-kube-api-access-dvwvn\") pod \"controller-manager-7765d77f48-khj52\" (UID: \"a81fb4f9-df13-4981-8f79-84c2c8a1bf98\") " pod="openshift-controller-manager/controller-manager-7765d77f48-khj52" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.939488 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgtnd\" (UniqueName: \"kubernetes.io/projected/cbd62824-a9c4-4180-9700-b09bf5c7e1df-kube-api-access-fgtnd\") pod \"route-controller-manager-79966c9c4b-wrjh9\" (UID: \"cbd62824-a9c4-4180-9700-b09bf5c7e1df\") " pod="openshift-route-controller-manager/route-controller-manager-79966c9c4b-wrjh9" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.939512 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbd62824-a9c4-4180-9700-b09bf5c7e1df-config\") pod \"route-controller-manager-79966c9c4b-wrjh9\" (UID: \"cbd62824-a9c4-4180-9700-b09bf5c7e1df\") " pod="openshift-route-controller-manager/route-controller-manager-79966c9c4b-wrjh9" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.939544 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a81fb4f9-df13-4981-8f79-84c2c8a1bf98-client-ca\") pod \"controller-manager-7765d77f48-khj52\" (UID: \"a81fb4f9-df13-4981-8f79-84c2c8a1bf98\") " 
pod="openshift-controller-manager/controller-manager-7765d77f48-khj52" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.939565 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a81fb4f9-df13-4981-8f79-84c2c8a1bf98-config\") pod \"controller-manager-7765d77f48-khj52\" (UID: \"a81fb4f9-df13-4981-8f79-84c2c8a1bf98\") " pod="openshift-controller-manager/controller-manager-7765d77f48-khj52" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.940656 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a81fb4f9-df13-4981-8f79-84c2c8a1bf98-client-ca\") pod \"controller-manager-7765d77f48-khj52\" (UID: \"a81fb4f9-df13-4981-8f79-84c2c8a1bf98\") " pod="openshift-controller-manager/controller-manager-7765d77f48-khj52" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.941015 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a81fb4f9-df13-4981-8f79-84c2c8a1bf98-proxy-ca-bundles\") pod \"controller-manager-7765d77f48-khj52\" (UID: \"a81fb4f9-df13-4981-8f79-84c2c8a1bf98\") " pod="openshift-controller-manager/controller-manager-7765d77f48-khj52" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.941083 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a81fb4f9-df13-4981-8f79-84c2c8a1bf98-config\") pod \"controller-manager-7765d77f48-khj52\" (UID: \"a81fb4f9-df13-4981-8f79-84c2c8a1bf98\") " pod="openshift-controller-manager/controller-manager-7765d77f48-khj52" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.943968 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a81fb4f9-df13-4981-8f79-84c2c8a1bf98-serving-cert\") pod \"controller-manager-7765d77f48-khj52\" (UID: \"a81fb4f9-df13-4981-8f79-84c2c8a1bf98\") " pod="openshift-controller-manager/controller-manager-7765d77f48-khj52" Nov 24 11:21:32 crc kubenswrapper[5072]: I1124 11:21:32.962673 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvwvn\" (UniqueName: \"kubernetes.io/projected/a81fb4f9-df13-4981-8f79-84c2c8a1bf98-kube-api-access-dvwvn\") pod \"controller-manager-7765d77f48-khj52\" (UID: \"a81fb4f9-df13-4981-8f79-84c2c8a1bf98\") " pod="openshift-controller-manager/controller-manager-7765d77f48-khj52" Nov 24 11:21:33 crc kubenswrapper[5072]: I1124 11:21:33.024239 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="421f29d9-28d7-4e85-852e-d25b0529497a" path="/var/lib/kubelet/pods/421f29d9-28d7-4e85-852e-d25b0529497a/volumes" Nov 24 11:21:33 crc kubenswrapper[5072]: I1124 11:21:33.024761 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca699c4e-ccec-4ff8-895f-109777beca4c" path="/var/lib/kubelet/pods/ca699c4e-ccec-4ff8-895f-109777beca4c/volumes" Nov 24 11:21:33 crc kubenswrapper[5072]: I1124 11:21:33.040517 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cbd62824-a9c4-4180-9700-b09bf5c7e1df-client-ca\") pod \"route-controller-manager-79966c9c4b-wrjh9\" (UID: \"cbd62824-a9c4-4180-9700-b09bf5c7e1df\") " pod="openshift-route-controller-manager/route-controller-manager-79966c9c4b-wrjh9" Nov 24 11:21:33 crc kubenswrapper[5072]: I1124 11:21:33.040580 5072 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbd62824-a9c4-4180-9700-b09bf5c7e1df-serving-cert\") pod \"route-controller-manager-79966c9c4b-wrjh9\" (UID: \"cbd62824-a9c4-4180-9700-b09bf5c7e1df\") " pod="openshift-route-controller-manager/route-controller-manager-79966c9c4b-wrjh9" Nov 24 11:21:33 crc kubenswrapper[5072]: I1124 11:21:33.040601 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgtnd\" (UniqueName: \"kubernetes.io/projected/cbd62824-a9c4-4180-9700-b09bf5c7e1df-kube-api-access-fgtnd\") pod \"route-controller-manager-79966c9c4b-wrjh9\" (UID: \"cbd62824-a9c4-4180-9700-b09bf5c7e1df\") " pod="openshift-route-controller-manager/route-controller-manager-79966c9c4b-wrjh9" Nov 24 11:21:33 crc kubenswrapper[5072]: I1124 11:21:33.040620 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbd62824-a9c4-4180-9700-b09bf5c7e1df-config\") pod \"route-controller-manager-79966c9c4b-wrjh9\" (UID: \"cbd62824-a9c4-4180-9700-b09bf5c7e1df\") " pod="openshift-route-controller-manager/route-controller-manager-79966c9c4b-wrjh9" Nov 24 11:21:33 crc kubenswrapper[5072]: I1124 11:21:33.041664 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbd62824-a9c4-4180-9700-b09bf5c7e1df-config\") pod \"route-controller-manager-79966c9c4b-wrjh9\" (UID: \"cbd62824-a9c4-4180-9700-b09bf5c7e1df\") " pod="openshift-route-controller-manager/route-controller-manager-79966c9c4b-wrjh9" Nov 24 11:21:33 crc kubenswrapper[5072]: I1124 11:21:33.042164 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cbd62824-a9c4-4180-9700-b09bf5c7e1df-client-ca\") pod \"route-controller-manager-79966c9c4b-wrjh9\" (UID: \"cbd62824-a9c4-4180-9700-b09bf5c7e1df\") " pod="openshift-route-controller-manager/route-controller-manager-79966c9c4b-wrjh9" Nov 24 11:21:33 crc kubenswrapper[5072]: I1124 11:21:33.054949 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbd62824-a9c4-4180-9700-b09bf5c7e1df-serving-cert\") pod \"route-controller-manager-79966c9c4b-wrjh9\" (UID: \"cbd62824-a9c4-4180-9700-b09bf5c7e1df\") " pod="openshift-route-controller-manager/route-controller-manager-79966c9c4b-wrjh9" Nov 24 11:21:33 crc kubenswrapper[5072]: I1124 11:21:33.062013 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgtnd\" (UniqueName: \"kubernetes.io/projected/cbd62824-a9c4-4180-9700-b09bf5c7e1df-kube-api-access-fgtnd\") pod \"route-controller-manager-79966c9c4b-wrjh9\" (UID: \"cbd62824-a9c4-4180-9700-b09bf5c7e1df\") " pod="openshift-route-controller-manager/route-controller-manager-79966c9c4b-wrjh9" Nov 24 11:21:33 crc kubenswrapper[5072]: I1124 11:21:33.128887 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7765d77f48-khj52" Nov 24 11:21:33 crc kubenswrapper[5072]: I1124 11:21:33.147282 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79966c9c4b-wrjh9" Nov 24 11:21:33 crc kubenswrapper[5072]: I1124 11:21:33.326133 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7765d77f48-khj52"] Nov 24 11:21:33 crc kubenswrapper[5072]: W1124 11:21:33.336786 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda81fb4f9_df13_4981_8f79_84c2c8a1bf98.slice/crio-0518eb44c4c177a105083f5ae64335451e34a95aa6568e8a70da3dce8cb363bd WatchSource:0}: Error finding container 0518eb44c4c177a105083f5ae64335451e34a95aa6568e8a70da3dce8cb363bd: Status 404 returned error can't find the container with id 0518eb44c4c177a105083f5ae64335451e34a95aa6568e8a70da3dce8cb363bd Nov 24 11:21:33 crc kubenswrapper[5072]: I1124 11:21:33.384079 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79966c9c4b-wrjh9"] Nov 24 11:21:33 crc kubenswrapper[5072]: I1124 11:21:33.767711 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7765d77f48-khj52" event={"ID":"a81fb4f9-df13-4981-8f79-84c2c8a1bf98","Type":"ContainerStarted","Data":"45e2351b62f4e8d0a530abc9d4c16eb8b8cfce79ce55000e2b252295ca312d46"} Nov 24 11:21:33 crc kubenswrapper[5072]: I1124 11:21:33.767764 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7765d77f48-khj52" event={"ID":"a81fb4f9-df13-4981-8f79-84c2c8a1bf98","Type":"ContainerStarted","Data":"0518eb44c4c177a105083f5ae64335451e34a95aa6568e8a70da3dce8cb363bd"} Nov 24 11:21:33 crc kubenswrapper[5072]: I1124 11:21:33.767909 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7765d77f48-khj52" Nov 24 11:21:33 crc kubenswrapper[5072]: I1124 11:21:33.770423 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79966c9c4b-wrjh9" event={"ID":"cbd62824-a9c4-4180-9700-b09bf5c7e1df","Type":"ContainerStarted","Data":"629d31a152536a8d340af40666015cbde9f15652b6f4799f7ffc36833382eaaf"} Nov 24 11:21:33 crc kubenswrapper[5072]: I1124 11:21:33.770469 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79966c9c4b-wrjh9" event={"ID":"cbd62824-a9c4-4180-9700-b09bf5c7e1df","Type":"ContainerStarted","Data":"b1609f56c79dbf1dfbe5ea4b1583f149900a84c5dd2a10dd4fee6267fcaa2b26"} Nov 24 11:21:33 crc kubenswrapper[5072]: I1124 11:21:33.770704 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-79966c9c4b-wrjh9" Nov 24 11:21:33 crc kubenswrapper[5072]: I1124 11:21:33.779244 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7765d77f48-khj52" Nov 24 11:21:33 crc kubenswrapper[5072]: I1124 11:21:33.780311 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-79966c9c4b-wrjh9" Nov 24 11:21:33 crc kubenswrapper[5072]: I1124 11:21:33.796067 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7765d77f48-khj52" podStartSLOduration=1.796047424 podStartE2EDuration="1.796047424s" 
podCreationTimestamp="2025-11-24 11:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:21:33.790813192 +0000 UTC m=+745.502337678" watchObservedRunningTime="2025-11-24 11:21:33.796047424 +0000 UTC m=+745.507571920" Nov 24 11:21:35 crc kubenswrapper[5072]: I1124 11:21:35.000507 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-75d856c88d-rz946" Nov 24 11:21:35 crc kubenswrapper[5072]: I1124 11:21:35.022398 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-79966c9c4b-wrjh9" podStartSLOduration=3.022310515 podStartE2EDuration="3.022310515s" podCreationTimestamp="2025-11-24 11:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:21:33.850505946 +0000 UTC m=+745.562030422" watchObservedRunningTime="2025-11-24 11:21:35.022310515 +0000 UTC m=+746.733834991" Nov 24 11:21:38 crc kubenswrapper[5072]: I1124 11:21:38.856449 5072 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 24 11:21:43 crc kubenswrapper[5072]: I1124 11:21:43.644916 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:21:43 crc kubenswrapper[5072]: I1124 11:21:43.645557 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:21:43 crc kubenswrapper[5072]: I1124 11:21:43.645620 5072 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 11:21:43 crc kubenswrapper[5072]: I1124 11:21:43.646368 5072 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9acae0aae65eaa2777547c62fd161d329c111af7aec02efa5b970dc26ddc2ae7"} pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 11:21:43 crc kubenswrapper[5072]: I1124 11:21:43.646484 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" containerID="cri-o://9acae0aae65eaa2777547c62fd161d329c111af7aec02efa5b970dc26ddc2ae7" gracePeriod=600 Nov 24 11:21:43 crc kubenswrapper[5072]: I1124 11:21:43.825400 5072 generic.go:334] "Generic (PLEG): container finished" podID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerID="9acae0aae65eaa2777547c62fd161d329c111af7aec02efa5b970dc26ddc2ae7" exitCode=0 Nov 24 11:21:43 crc kubenswrapper[5072]: I1124 11:21:43.825454 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" 
event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerDied","Data":"9acae0aae65eaa2777547c62fd161d329c111af7aec02efa5b970dc26ddc2ae7"} Nov 24 11:21:43 crc kubenswrapper[5072]: I1124 11:21:43.825492 5072 scope.go:117] "RemoveContainer" containerID="0a6ebf9514d44fa623afa2ad42e78869426bcafc62c418072ab42294a40efd6e" Nov 24 11:21:44 crc kubenswrapper[5072]: I1124 11:21:44.833844 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerStarted","Data":"8e2fafce48ed7d24bea410cc4a09f0aa29c5014f23ce7269a5e5cc3ebe7aa12f"} Nov 24 11:21:48 crc kubenswrapper[5072]: I1124 11:21:48.789158 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d4mnf"] Nov 24 11:21:48 crc kubenswrapper[5072]: I1124 11:21:48.791599 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d4mnf" Nov 24 11:21:48 crc kubenswrapper[5072]: I1124 11:21:48.803552 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d4mnf"] Nov 24 11:21:48 crc kubenswrapper[5072]: I1124 11:21:48.877589 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t45x2\" (UniqueName: \"kubernetes.io/projected/edcb1a80-ffc4-4a75-9f38-07491b5c4c68-kube-api-access-t45x2\") pod \"redhat-operators-d4mnf\" (UID: \"edcb1a80-ffc4-4a75-9f38-07491b5c4c68\") " pod="openshift-marketplace/redhat-operators-d4mnf" Nov 24 11:21:48 crc kubenswrapper[5072]: I1124 11:21:48.877688 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edcb1a80-ffc4-4a75-9f38-07491b5c4c68-utilities\") pod \"redhat-operators-d4mnf\" (UID: \"edcb1a80-ffc4-4a75-9f38-07491b5c4c68\") " pod="openshift-marketplace/redhat-operators-d4mnf" Nov 24 11:21:48 crc kubenswrapper[5072]: I1124 11:21:48.877723 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edcb1a80-ffc4-4a75-9f38-07491b5c4c68-catalog-content\") pod \"redhat-operators-d4mnf\" (UID: \"edcb1a80-ffc4-4a75-9f38-07491b5c4c68\") " pod="openshift-marketplace/redhat-operators-d4mnf" Nov 24 11:21:48 crc kubenswrapper[5072]: I1124 11:21:48.978720 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edcb1a80-ffc4-4a75-9f38-07491b5c4c68-catalog-content\") pod \"redhat-operators-d4mnf\" (UID: \"edcb1a80-ffc4-4a75-9f38-07491b5c4c68\") " pod="openshift-marketplace/redhat-operators-d4mnf" Nov 24 11:21:48 crc kubenswrapper[5072]: I1124 11:21:48.979052 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t45x2\" (UniqueName: \"kubernetes.io/projected/edcb1a80-ffc4-4a75-9f38-07491b5c4c68-kube-api-access-t45x2\") pod \"redhat-operators-d4mnf\" (UID: \"edcb1a80-ffc4-4a75-9f38-07491b5c4c68\") " pod="openshift-marketplace/redhat-operators-d4mnf" Nov 24 11:21:48 crc kubenswrapper[5072]: I1124 11:21:48.979163 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edcb1a80-ffc4-4a75-9f38-07491b5c4c68-utilities\") pod \"redhat-operators-d4mnf\" (UID: \"edcb1a80-ffc4-4a75-9f38-07491b5c4c68\") " 
pod="openshift-marketplace/redhat-operators-d4mnf" Nov 24 11:21:48 crc kubenswrapper[5072]: I1124 11:21:48.979406 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edcb1a80-ffc4-4a75-9f38-07491b5c4c68-catalog-content\") pod \"redhat-operators-d4mnf\" (UID: \"edcb1a80-ffc4-4a75-9f38-07491b5c4c68\") " pod="openshift-marketplace/redhat-operators-d4mnf" Nov 24 11:21:48 crc kubenswrapper[5072]: I1124 11:21:48.979779 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edcb1a80-ffc4-4a75-9f38-07491b5c4c68-utilities\") pod \"redhat-operators-d4mnf\" (UID: \"edcb1a80-ffc4-4a75-9f38-07491b5c4c68\") " pod="openshift-marketplace/redhat-operators-d4mnf" Nov 24 11:21:49 crc kubenswrapper[5072]: I1124 11:21:49.003419 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t45x2\" (UniqueName: \"kubernetes.io/projected/edcb1a80-ffc4-4a75-9f38-07491b5c4c68-kube-api-access-t45x2\") pod \"redhat-operators-d4mnf\" (UID: \"edcb1a80-ffc4-4a75-9f38-07491b5c4c68\") " pod="openshift-marketplace/redhat-operators-d4mnf" Nov 24 11:21:49 crc kubenswrapper[5072]: I1124 11:21:49.139698 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d4mnf" Nov 24 11:21:49 crc kubenswrapper[5072]: I1124 11:21:49.572177 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d4mnf"] Nov 24 11:21:49 crc kubenswrapper[5072]: W1124 11:21:49.579433 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedcb1a80_ffc4_4a75_9f38_07491b5c4c68.slice/crio-009b00a77b894538ef5b560001a6d2d6937ad0b16326d5a8ec8515793b36d596 WatchSource:0}: Error finding container 009b00a77b894538ef5b560001a6d2d6937ad0b16326d5a8ec8515793b36d596: Status 404 returned error can't find the container with id 009b00a77b894538ef5b560001a6d2d6937ad0b16326d5a8ec8515793b36d596 Nov 24 11:21:49 crc kubenswrapper[5072]: I1124 11:21:49.860842 5072 generic.go:334] "Generic (PLEG): container finished" podID="edcb1a80-ffc4-4a75-9f38-07491b5c4c68" containerID="235a2666d0468fac05c353f4d573cb345c6acf54cdc345493bd4d3bc4140e6be" exitCode=0 Nov 24 11:21:49 crc kubenswrapper[5072]: I1124 11:21:49.860939 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4mnf" event={"ID":"edcb1a80-ffc4-4a75-9f38-07491b5c4c68","Type":"ContainerDied","Data":"235a2666d0468fac05c353f4d573cb345c6acf54cdc345493bd4d3bc4140e6be"} Nov 24 11:21:49 crc kubenswrapper[5072]: I1124 11:21:49.861116 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4mnf" event={"ID":"edcb1a80-ffc4-4a75-9f38-07491b5c4c68","Type":"ContainerStarted","Data":"009b00a77b894538ef5b560001a6d2d6937ad0b16326d5a8ec8515793b36d596"} Nov 24 11:21:50 crc kubenswrapper[5072]: I1124 11:21:50.877423 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4mnf" event={"ID":"edcb1a80-ffc4-4a75-9f38-07491b5c4c68","Type":"ContainerStarted","Data":"e6c4a4de2e1005447a2d4d496b5c45935e263a15524b42ed3f2c3830acf91254"} Nov 24 11:21:51 crc kubenswrapper[5072]: I1124 11:21:51.891507 5072 generic.go:334] "Generic (PLEG): container finished" podID="edcb1a80-ffc4-4a75-9f38-07491b5c4c68" 
containerID="e6c4a4de2e1005447a2d4d496b5c45935e263a15524b42ed3f2c3830acf91254" exitCode=0 Nov 24 11:21:51 crc kubenswrapper[5072]: I1124 11:21:51.891555 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4mnf" event={"ID":"edcb1a80-ffc4-4a75-9f38-07491b5c4c68","Type":"ContainerDied","Data":"e6c4a4de2e1005447a2d4d496b5c45935e263a15524b42ed3f2c3830acf91254"} Nov 24 11:21:52 crc kubenswrapper[5072]: I1124 11:21:52.901557 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4mnf" event={"ID":"edcb1a80-ffc4-4a75-9f38-07491b5c4c68","Type":"ContainerStarted","Data":"90e04ddde7d4725a1b97f093f372d3103d1cec24f841609fb8ec40a111a6c846"} Nov 24 11:21:52 crc kubenswrapper[5072]: I1124 11:21:52.923416 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d4mnf" podStartSLOduration=2.335864858 podStartE2EDuration="4.923395232s" podCreationTimestamp="2025-11-24 11:21:48 +0000 UTC" firstStartedPulling="2025-11-24 11:21:49.862602902 +0000 UTC m=+761.574127388" lastFinishedPulling="2025-11-24 11:21:52.450133286 +0000 UTC m=+764.161657762" observedRunningTime="2025-11-24 11:21:52.922358556 +0000 UTC m=+764.633883042" watchObservedRunningTime="2025-11-24 11:21:52.923395232 +0000 UTC m=+764.634919708" Nov 24 11:21:54 crc kubenswrapper[5072]: I1124 11:21:54.584641 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-b6dc8dd56-6d5x5" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.273589 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-mjmzs"] Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.274552 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-mjmzs" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.276577 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-2nhqx"] Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.279020 5072 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-7c4sp" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.284760 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-2nhqx" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.288888 5072 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.289037 5072 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.289588 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.308057 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-mjmzs"] Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.357025 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5kf6\" (UniqueName: \"kubernetes.io/projected/b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88-kube-api-access-s5kf6\") pod \"frr-k8s-2nhqx\" (UID: \"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88\") " pod="metallb-system/frr-k8s-2nhqx" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.357077 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4839b57-91b0-4472-ac9e-fd342a3430c0-cert\") pod \"frr-k8s-webhook-server-6998585d5-mjmzs\" (UID: \"a4839b57-91b0-4472-ac9e-fd342a3430c0\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-mjmzs" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.357105 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88-frr-conf\") pod \"frr-k8s-2nhqx\" (UID: \"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88\") " pod="metallb-system/frr-k8s-2nhqx" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.357154 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4q2j\" (UniqueName: \"kubernetes.io/projected/a4839b57-91b0-4472-ac9e-fd342a3430c0-kube-api-access-v4q2j\") pod \"frr-k8s-webhook-server-6998585d5-mjmzs\" (UID: \"a4839b57-91b0-4472-ac9e-fd342a3430c0\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-mjmzs" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.357179 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88-frr-sockets\") pod \"frr-k8s-2nhqx\" (UID: \"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88\") " pod="metallb-system/frr-k8s-2nhqx" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.357204 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88-metrics-certs\") pod \"frr-k8s-2nhqx\" (UID: \"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88\") " pod="metallb-system/frr-k8s-2nhqx" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.357297 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88-metrics\") pod \"frr-k8s-2nhqx\" (UID: \"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88\") " pod="metallb-system/frr-k8s-2nhqx" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 
11:21:55.357355 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88-reloader\") pod \"frr-k8s-2nhqx\" (UID: \"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88\") " pod="metallb-system/frr-k8s-2nhqx" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.357430 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88-frr-startup\") pod \"frr-k8s-2nhqx\" (UID: \"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88\") " pod="metallb-system/frr-k8s-2nhqx" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.365494 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-xc9ht"] Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.367125 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-xc9ht" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.370430 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.370677 5072 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.370845 5072 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-6gbmb" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.371004 5072 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.390962 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-54sxn"] Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.392480 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-54sxn" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.397475 5072 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.405062 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-54sxn"] Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.458416 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88-metrics-certs\") pod \"frr-k8s-2nhqx\" (UID: \"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88\") " pod="metallb-system/frr-k8s-2nhqx" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.458712 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhnf2\" (UniqueName: \"kubernetes.io/projected/e5b09acb-4f8f-45f4-b669-c491f59a52e1-kube-api-access-hhnf2\") pod \"speaker-xc9ht\" (UID: \"e5b09acb-4f8f-45f4-b669-c491f59a52e1\") " pod="metallb-system/speaker-xc9ht" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.458809 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88-metrics\") pod \"frr-k8s-2nhqx\" (UID: \"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88\") " pod="metallb-system/frr-k8s-2nhqx" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.458908 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e5b09acb-4f8f-45f4-b669-c491f59a52e1-memberlist\") pod \"speaker-xc9ht\" (UID: \"e5b09acb-4f8f-45f4-b669-c491f59a52e1\") " pod="metallb-system/speaker-xc9ht" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.459036 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88-reloader\") pod \"frr-k8s-2nhqx\" (UID: \"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88\") " pod="metallb-system/frr-k8s-2nhqx" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.459136 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88-frr-startup\") pod \"frr-k8s-2nhqx\" (UID: \"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88\") " pod="metallb-system/frr-k8s-2nhqx" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.459233 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59lx7\" (UniqueName: \"kubernetes.io/projected/b9a94a05-9a99-48b5-8ba7-a1bd99f05577-kube-api-access-59lx7\") pod \"controller-6c7b4b5f48-54sxn\" (UID: \"b9a94a05-9a99-48b5-8ba7-a1bd99f05577\") " pod="metallb-system/controller-6c7b4b5f48-54sxn" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.459338 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5kf6\" (UniqueName: \"kubernetes.io/projected/b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88-kube-api-access-s5kf6\") pod \"frr-k8s-2nhqx\" (UID: \"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88\") " pod="metallb-system/frr-k8s-2nhqx" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.459457 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b9a94a05-9a99-48b5-8ba7-a1bd99f05577-cert\") pod \"controller-6c7b4b5f48-54sxn\" (UID: \"b9a94a05-9a99-48b5-8ba7-a1bd99f05577\") " pod="metallb-system/controller-6c7b4b5f48-54sxn" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.459570 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4839b57-91b0-4472-ac9e-fd342a3430c0-cert\") pod \"frr-k8s-webhook-server-6998585d5-mjmzs\" (UID: \"a4839b57-91b0-4472-ac9e-fd342a3430c0\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-mjmzs" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.459683 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b9a94a05-9a99-48b5-8ba7-a1bd99f05577-metrics-certs\") pod \"controller-6c7b4b5f48-54sxn\" (UID: \"b9a94a05-9a99-48b5-8ba7-a1bd99f05577\") " pod="metallb-system/controller-6c7b4b5f48-54sxn" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.459785 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88-frr-conf\") pod \"frr-k8s-2nhqx\" (UID: \"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88\") " pod="metallb-system/frr-k8s-2nhqx" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.459888 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e5b09acb-4f8f-45f4-b669-c491f59a52e1-metrics-certs\") pod \"speaker-xc9ht\" (UID: \"e5b09acb-4f8f-45f4-b669-c491f59a52e1\") " pod="metallb-system/speaker-xc9ht" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.459998 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e5b09acb-4f8f-45f4-b669-c491f59a52e1-metallb-excludel2\") pod \"speaker-xc9ht\" (UID: \"e5b09acb-4f8f-45f4-b669-c491f59a52e1\") " pod="metallb-system/speaker-xc9ht" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.459243 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88-metrics\") pod \"frr-k8s-2nhqx\" (UID: \"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88\") " pod="metallb-system/frr-k8s-2nhqx" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.460195 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4q2j\" (UniqueName: \"kubernetes.io/projected/a4839b57-91b0-4472-ac9e-fd342a3430c0-kube-api-access-v4q2j\") pod \"frr-k8s-webhook-server-6998585d5-mjmzs\" (UID: \"a4839b57-91b0-4472-ac9e-fd342a3430c0\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-mjmzs" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.460317 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88-frr-sockets\") pod \"frr-k8s-2nhqx\" (UID: \"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88\") " pod="metallb-system/frr-k8s-2nhqx" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.460353 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88-frr-startup\") pod \"frr-k8s-2nhqx\" 
(UID: \"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88\") " pod="metallb-system/frr-k8s-2nhqx" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.459473 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88-reloader\") pod \"frr-k8s-2nhqx\" (UID: \"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88\") " pod="metallb-system/frr-k8s-2nhqx" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.460576 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88-frr-conf\") pod \"frr-k8s-2nhqx\" (UID: \"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88\") " pod="metallb-system/frr-k8s-2nhqx" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.460599 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88-frr-sockets\") pod \"frr-k8s-2nhqx\" (UID: \"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88\") " pod="metallb-system/frr-k8s-2nhqx" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.465275 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4839b57-91b0-4472-ac9e-fd342a3430c0-cert\") pod \"frr-k8s-webhook-server-6998585d5-mjmzs\" (UID: \"a4839b57-91b0-4472-ac9e-fd342a3430c0\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-mjmzs" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.474007 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5kf6\" (UniqueName: \"kubernetes.io/projected/b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88-kube-api-access-s5kf6\") pod \"frr-k8s-2nhqx\" (UID: \"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88\") " pod="metallb-system/frr-k8s-2nhqx" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.489854 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88-metrics-certs\") pod \"frr-k8s-2nhqx\" (UID: \"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88\") " pod="metallb-system/frr-k8s-2nhqx" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.495114 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4q2j\" (UniqueName: \"kubernetes.io/projected/a4839b57-91b0-4472-ac9e-fd342a3430c0-kube-api-access-v4q2j\") pod \"frr-k8s-webhook-server-6998585d5-mjmzs\" (UID: \"a4839b57-91b0-4472-ac9e-fd342a3430c0\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-mjmzs" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.561808 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59lx7\" (UniqueName: \"kubernetes.io/projected/b9a94a05-9a99-48b5-8ba7-a1bd99f05577-kube-api-access-59lx7\") pod \"controller-6c7b4b5f48-54sxn\" (UID: \"b9a94a05-9a99-48b5-8ba7-a1bd99f05577\") " pod="metallb-system/controller-6c7b4b5f48-54sxn" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.561859 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b9a94a05-9a99-48b5-8ba7-a1bd99f05577-cert\") pod \"controller-6c7b4b5f48-54sxn\" (UID: \"b9a94a05-9a99-48b5-8ba7-a1bd99f05577\") " pod="metallb-system/controller-6c7b4b5f48-54sxn" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.561885 5072 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b9a94a05-9a99-48b5-8ba7-a1bd99f05577-metrics-certs\") pod \"controller-6c7b4b5f48-54sxn\" (UID: \"b9a94a05-9a99-48b5-8ba7-a1bd99f05577\") " pod="metallb-system/controller-6c7b4b5f48-54sxn" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.561910 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e5b09acb-4f8f-45f4-b669-c491f59a52e1-metrics-certs\") pod \"speaker-xc9ht\" (UID: \"e5b09acb-4f8f-45f4-b669-c491f59a52e1\") " pod="metallb-system/speaker-xc9ht" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.561942 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e5b09acb-4f8f-45f4-b669-c491f59a52e1-metallb-excludel2\") pod \"speaker-xc9ht\" (UID: \"e5b09acb-4f8f-45f4-b669-c491f59a52e1\") " pod="metallb-system/speaker-xc9ht" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.561980 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhnf2\" (UniqueName: \"kubernetes.io/projected/e5b09acb-4f8f-45f4-b669-c491f59a52e1-kube-api-access-hhnf2\") pod \"speaker-xc9ht\" (UID: \"e5b09acb-4f8f-45f4-b669-c491f59a52e1\") " pod="metallb-system/speaker-xc9ht" Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.562003 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e5b09acb-4f8f-45f4-b669-c491f59a52e1-memberlist\") pod \"speaker-xc9ht\" (UID: \"e5b09acb-4f8f-45f4-b669-c491f59a52e1\") " pod="metallb-system/speaker-xc9ht" Nov 24 11:21:55 crc kubenswrapper[5072]: E1124 11:21:55.562123 5072 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 24 11:21:55 crc kubenswrapper[5072]: E1124 11:21:55.562174 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5b09acb-4f8f-45f4-b669-c491f59a52e1-memberlist podName:e5b09acb-4f8f-45f4-b669-c491f59a52e1 nodeName:}" failed. No retries permitted until 2025-11-24 11:21:56.062156857 +0000 UTC m=+767.773681333 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e5b09acb-4f8f-45f4-b669-c491f59a52e1-memberlist") pod "speaker-xc9ht" (UID: "e5b09acb-4f8f-45f4-b669-c491f59a52e1") : secret "metallb-memberlist" not found Nov 24 11:21:55 crc kubenswrapper[5072]: E1124 11:21:55.562524 5072 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Nov 24 11:21:55 crc kubenswrapper[5072]: E1124 11:21:55.562553 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5b09acb-4f8f-45f4-b669-c491f59a52e1-metrics-certs podName:e5b09acb-4f8f-45f4-b669-c491f59a52e1 nodeName:}" failed. No retries permitted until 2025-11-24 11:21:56.062545687 +0000 UTC m=+767.774070163 (durationBeforeRetry 500ms). 
Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.562842 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e5b09acb-4f8f-45f4-b669-c491f59a52e1-metallb-excludel2\") pod \"speaker-xc9ht\" (UID: \"e5b09acb-4f8f-45f4-b669-c491f59a52e1\") " pod="metallb-system/speaker-xc9ht"
Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.565076 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b9a94a05-9a99-48b5-8ba7-a1bd99f05577-cert\") pod \"controller-6c7b4b5f48-54sxn\" (UID: \"b9a94a05-9a99-48b5-8ba7-a1bd99f05577\") " pod="metallb-system/controller-6c7b4b5f48-54sxn"
Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.565629 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b9a94a05-9a99-48b5-8ba7-a1bd99f05577-metrics-certs\") pod \"controller-6c7b4b5f48-54sxn\" (UID: \"b9a94a05-9a99-48b5-8ba7-a1bd99f05577\") " pod="metallb-system/controller-6c7b4b5f48-54sxn"
Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.577961 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhnf2\" (UniqueName: \"kubernetes.io/projected/e5b09acb-4f8f-45f4-b669-c491f59a52e1-kube-api-access-hhnf2\") pod \"speaker-xc9ht\" (UID: \"e5b09acb-4f8f-45f4-b669-c491f59a52e1\") " pod="metallb-system/speaker-xc9ht"
Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.583325 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59lx7\" (UniqueName: \"kubernetes.io/projected/b9a94a05-9a99-48b5-8ba7-a1bd99f05577-kube-api-access-59lx7\") pod \"controller-6c7b4b5f48-54sxn\" (UID: \"b9a94a05-9a99-48b5-8ba7-a1bd99f05577\") " pod="metallb-system/controller-6c7b4b5f48-54sxn"
Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.603022 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-mjmzs"
Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.615932 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-2nhqx"
Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.709664 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-54sxn"
Nov 24 11:21:55 crc kubenswrapper[5072]: I1124 11:21:55.918998 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2nhqx" event={"ID":"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88","Type":"ContainerStarted","Data":"a987bb0a6dc09ef3bd12a39657bfbb229aa58e40c56b95f8e3cb712744628a21"}
Nov 24 11:21:56 crc kubenswrapper[5072]: I1124 11:21:56.034422 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-mjmzs"]
Nov 24 11:21:56 crc kubenswrapper[5072]: W1124 11:21:56.040118 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4839b57_91b0_4472_ac9e_fd342a3430c0.slice/crio-1f9eb8bb99740ff97c22d30c96d01478fdb0e35b9ba18df35f99aed7d46fb228 WatchSource:0}: Error finding container 1f9eb8bb99740ff97c22d30c96d01478fdb0e35b9ba18df35f99aed7d46fb228: Status 404 returned error can't find the container with id 1f9eb8bb99740ff97c22d30c96d01478fdb0e35b9ba18df35f99aed7d46fb228
Nov 24 11:21:56 crc kubenswrapper[5072]: I1124 11:21:56.068243 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e5b09acb-4f8f-45f4-b669-c491f59a52e1-metrics-certs\") pod \"speaker-xc9ht\" (UID: \"e5b09acb-4f8f-45f4-b669-c491f59a52e1\") " pod="metallb-system/speaker-xc9ht"
Nov 24 11:21:56 crc kubenswrapper[5072]: I1124 11:21:56.068341 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e5b09acb-4f8f-45f4-b669-c491f59a52e1-memberlist\") pod \"speaker-xc9ht\" (UID: \"e5b09acb-4f8f-45f4-b669-c491f59a52e1\") " pod="metallb-system/speaker-xc9ht"
Nov 24 11:21:56 crc kubenswrapper[5072]: E1124 11:21:56.068443 5072 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Nov 24 11:21:56 crc kubenswrapper[5072]: E1124 11:21:56.068498 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5b09acb-4f8f-45f4-b669-c491f59a52e1-memberlist podName:e5b09acb-4f8f-45f4-b669-c491f59a52e1 nodeName:}" failed. No retries permitted until 2025-11-24 11:21:57.068482106 +0000 UTC m=+768.780006582 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e5b09acb-4f8f-45f4-b669-c491f59a52e1-memberlist") pod "speaker-xc9ht" (UID: "e5b09acb-4f8f-45f4-b669-c491f59a52e1") : secret "metallb-memberlist" not found
Nov 24 11:21:56 crc kubenswrapper[5072]: I1124 11:21:56.073529 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e5b09acb-4f8f-45f4-b669-c491f59a52e1-metrics-certs\") pod \"speaker-xc9ht\" (UID: \"e5b09acb-4f8f-45f4-b669-c491f59a52e1\") " pod="metallb-system/speaker-xc9ht"
Nov 24 11:21:56 crc kubenswrapper[5072]: I1124 11:21:56.135321 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-54sxn"]
Nov 24 11:21:56 crc kubenswrapper[5072]: W1124 11:21:56.140721 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9a94a05_9a99_48b5_8ba7_a1bd99f05577.slice/crio-62d4107cb0171ead25f9d21cee1d96c5e3cea8aa8b9c73a70b0829522bdc6b64 WatchSource:0}: Error finding container 62d4107cb0171ead25f9d21cee1d96c5e3cea8aa8b9c73a70b0829522bdc6b64: Status 404 returned error can't find the container with id 62d4107cb0171ead25f9d21cee1d96c5e3cea8aa8b9c73a70b0829522bdc6b64
Nov 24 11:21:56 crc kubenswrapper[5072]: I1124 11:21:56.936681 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-mjmzs" event={"ID":"a4839b57-91b0-4472-ac9e-fd342a3430c0","Type":"ContainerStarted","Data":"1f9eb8bb99740ff97c22d30c96d01478fdb0e35b9ba18df35f99aed7d46fb228"}
Nov 24 11:21:56 crc kubenswrapper[5072]: I1124 11:21:56.943964 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-54sxn" event={"ID":"b9a94a05-9a99-48b5-8ba7-a1bd99f05577","Type":"ContainerStarted","Data":"604f7d2858c47d01018c4c3d687df57a081b23e6eaeef4894f21b43c7d71274f"}
Nov 24 11:21:56 crc kubenswrapper[5072]: I1124 11:21:56.944015 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-54sxn" event={"ID":"b9a94a05-9a99-48b5-8ba7-a1bd99f05577","Type":"ContainerStarted","Data":"5668fce035a3dfc5579a46e0298efdc05c47228a26789897ce16c1bf69ec1a8d"}
Nov 24 11:21:56 crc kubenswrapper[5072]: I1124 11:21:56.944028 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-54sxn" event={"ID":"b9a94a05-9a99-48b5-8ba7-a1bd99f05577","Type":"ContainerStarted","Data":"62d4107cb0171ead25f9d21cee1d96c5e3cea8aa8b9c73a70b0829522bdc6b64"}
Nov 24 11:21:56 crc kubenswrapper[5072]: I1124 11:21:56.944143 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-54sxn"
Nov 24 11:21:56 crc kubenswrapper[5072]: I1124 11:21:56.963204 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-54sxn" podStartSLOduration=1.963188933 podStartE2EDuration="1.963188933s" podCreationTimestamp="2025-11-24 11:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:21:56.962513686 +0000 UTC m=+768.674038162" watchObservedRunningTime="2025-11-24 11:21:56.963188933 +0000 UTC m=+768.674713399"
Nov 24 11:21:57 crc kubenswrapper[5072]: I1124 11:21:57.087613 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e5b09acb-4f8f-45f4-b669-c491f59a52e1-memberlist\") pod \"speaker-xc9ht\" (UID: \"e5b09acb-4f8f-45f4-b669-c491f59a52e1\") " pod="metallb-system/speaker-xc9ht"
Nov 24 11:21:57 crc kubenswrapper[5072]: I1124 11:21:57.111305 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e5b09acb-4f8f-45f4-b669-c491f59a52e1-memberlist\") pod \"speaker-xc9ht\" (UID: \"e5b09acb-4f8f-45f4-b669-c491f59a52e1\") " pod="metallb-system/speaker-xc9ht"
Nov 24 11:21:57 crc kubenswrapper[5072]: I1124 11:21:57.184636 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-xc9ht"
Nov 24 11:21:57 crc kubenswrapper[5072]: W1124 11:21:57.215904 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5b09acb_4f8f_45f4_b669_c491f59a52e1.slice/crio-ba8f1afbd24cd2a7afd0ddca5030c3105278c7f5d151f4e66780c34f4fce419c WatchSource:0}: Error finding container ba8f1afbd24cd2a7afd0ddca5030c3105278c7f5d151f4e66780c34f4fce419c: Status 404 returned error can't find the container with id ba8f1afbd24cd2a7afd0ddca5030c3105278c7f5d151f4e66780c34f4fce419c
Nov 24 11:21:57 crc kubenswrapper[5072]: I1124 11:21:57.956677 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xc9ht" event={"ID":"e5b09acb-4f8f-45f4-b669-c491f59a52e1","Type":"ContainerStarted","Data":"948d5a462ed83c754ad16d279f4a8e967b72de93741762cc61f9347300022705"}
Nov 24 11:21:57 crc kubenswrapper[5072]: I1124 11:21:57.956927 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xc9ht" event={"ID":"e5b09acb-4f8f-45f4-b669-c491f59a52e1","Type":"ContainerStarted","Data":"ba8f1afbd24cd2a7afd0ddca5030c3105278c7f5d151f4e66780c34f4fce419c"}
Nov 24 11:21:58 crc kubenswrapper[5072]: I1124 11:21:58.966409 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xc9ht" event={"ID":"e5b09acb-4f8f-45f4-b669-c491f59a52e1","Type":"ContainerStarted","Data":"968543c49b43ff216aac664ba8979938260f1efe3c3225ef450809c37856faa0"}
Nov 24 11:21:58 crc kubenswrapper[5072]: I1124 11:21:58.967325 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-xc9ht"
Nov 24 11:21:58 crc kubenswrapper[5072]: I1124 11:21:58.986937 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-xc9ht" podStartSLOduration=3.986921789 podStartE2EDuration="3.986921789s" podCreationTimestamp="2025-11-24 11:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:21:58.986596641 +0000 UTC m=+770.698121117" watchObservedRunningTime="2025-11-24 11:21:58.986921789 +0000 UTC m=+770.698446265"
Nov 24 11:21:59 crc kubenswrapper[5072]: I1124 11:21:59.140951 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d4mnf"
Nov 24 11:21:59 crc kubenswrapper[5072]: I1124 11:21:59.141501 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d4mnf"
Nov 24 11:22:00 crc kubenswrapper[5072]: I1124 11:22:00.252110 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d4mnf" podUID="edcb1a80-ffc4-4a75-9f38-07491b5c4c68" containerName="registry-server" probeResult="failure" output=<
Nov 24 11:22:00 crc kubenswrapper[5072]: timeout: failed to connect service ":50051" within 1s
Nov 24 11:22:00 crc kubenswrapper[5072]: >
Nov 24 11:22:02 crc kubenswrapper[5072]: I1124 11:22:02.995994 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-mjmzs" event={"ID":"a4839b57-91b0-4472-ac9e-fd342a3430c0","Type":"ContainerStarted","Data":"02c6b0614262d4fa48b1af59e1dc588ca3c4fefadf97437355183fb796d4dbfb"}
Nov 24 11:22:02 crc kubenswrapper[5072]: I1124 11:22:02.996617 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-mjmzs"
Nov 24 11:22:02 crc kubenswrapper[5072]: I1124 11:22:02.998328 5072 generic.go:334] "Generic (PLEG): container finished" podID="b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88" containerID="e0c8f166a3768b251380340f5fbe7abd9564fb3d2eba13f381cbb805de92a793" exitCode=0
Nov 24 11:22:02 crc kubenswrapper[5072]: I1124 11:22:02.998359 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2nhqx" event={"ID":"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88","Type":"ContainerDied","Data":"e0c8f166a3768b251380340f5fbe7abd9564fb3d2eba13f381cbb805de92a793"}
Nov 24 11:22:03 crc kubenswrapper[5072]: I1124 11:22:03.017608 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-mjmzs" podStartSLOduration=1.380132709 podStartE2EDuration="8.017589459s" podCreationTimestamp="2025-11-24 11:21:55 +0000 UTC" firstStartedPulling="2025-11-24 11:21:56.042490541 +0000 UTC m=+767.754015017" lastFinishedPulling="2025-11-24 11:22:02.679947291 +0000 UTC m=+774.391471767" observedRunningTime="2025-11-24 11:22:03.016703417 +0000 UTC m=+774.728227903" watchObservedRunningTime="2025-11-24 11:22:03.017589459 +0000 UTC m=+774.729113935"
Nov 24 11:22:04 crc kubenswrapper[5072]: I1124 11:22:04.006065 5072 generic.go:334] "Generic (PLEG): container finished" podID="b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88" containerID="a09f1caecac91eb815326228c0e1f5173b198fbe28bd585c88fd116493504445" exitCode=0
Nov 24 11:22:04 crc kubenswrapper[5072]: I1124 11:22:04.006136 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2nhqx" event={"ID":"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88","Type":"ContainerDied","Data":"a09f1caecac91eb815326228c0e1f5173b198fbe28bd585c88fd116493504445"}
Nov 24 11:22:04 crc kubenswrapper[5072]: I1124 11:22:04.598587 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zl2mk"]
Nov 24 11:22:04 crc kubenswrapper[5072]: I1124 11:22:04.599980 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zl2mk"
Nov 24 11:22:04 crc kubenswrapper[5072]: I1124 11:22:04.616494 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zl2mk"]
Nov 24 11:22:04 crc kubenswrapper[5072]: I1124 11:22:04.699600 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5dac824-1711-46d7-8bd3-55975eb05d63-catalog-content\") pod \"certified-operators-zl2mk\" (UID: \"d5dac824-1711-46d7-8bd3-55975eb05d63\") " pod="openshift-marketplace/certified-operators-zl2mk"
Nov 24 11:22:04 crc kubenswrapper[5072]: I1124 11:22:04.699668 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqbxq\" (UniqueName: \"kubernetes.io/projected/d5dac824-1711-46d7-8bd3-55975eb05d63-kube-api-access-rqbxq\") pod \"certified-operators-zl2mk\" (UID: \"d5dac824-1711-46d7-8bd3-55975eb05d63\") " pod="openshift-marketplace/certified-operators-zl2mk"
Nov 24 11:22:04 crc kubenswrapper[5072]: I1124 11:22:04.699703 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5dac824-1711-46d7-8bd3-55975eb05d63-utilities\") pod \"certified-operators-zl2mk\" (UID: \"d5dac824-1711-46d7-8bd3-55975eb05d63\") " pod="openshift-marketplace/certified-operators-zl2mk"
Nov 24 11:22:04 crc kubenswrapper[5072]: I1124 11:22:04.801327 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqbxq\" (UniqueName: \"kubernetes.io/projected/d5dac824-1711-46d7-8bd3-55975eb05d63-kube-api-access-rqbxq\") pod \"certified-operators-zl2mk\" (UID: \"d5dac824-1711-46d7-8bd3-55975eb05d63\") " pod="openshift-marketplace/certified-operators-zl2mk"
Nov 24 11:22:04 crc kubenswrapper[5072]: I1124 11:22:04.801395 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5dac824-1711-46d7-8bd3-55975eb05d63-utilities\") pod \"certified-operators-zl2mk\" (UID: \"d5dac824-1711-46d7-8bd3-55975eb05d63\") " pod="openshift-marketplace/certified-operators-zl2mk"
Nov 24 11:22:04 crc kubenswrapper[5072]: I1124 11:22:04.801506 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5dac824-1711-46d7-8bd3-55975eb05d63-catalog-content\") pod \"certified-operators-zl2mk\" (UID: \"d5dac824-1711-46d7-8bd3-55975eb05d63\") " pod="openshift-marketplace/certified-operators-zl2mk"
Nov 24 11:22:04 crc kubenswrapper[5072]: I1124 11:22:04.802035 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5dac824-1711-46d7-8bd3-55975eb05d63-catalog-content\") pod \"certified-operators-zl2mk\" (UID: \"d5dac824-1711-46d7-8bd3-55975eb05d63\") " pod="openshift-marketplace/certified-operators-zl2mk"
Nov 24 11:22:04 crc kubenswrapper[5072]: I1124 11:22:04.802139 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5dac824-1711-46d7-8bd3-55975eb05d63-utilities\") pod \"certified-operators-zl2mk\" (UID: \"d5dac824-1711-46d7-8bd3-55975eb05d63\") " pod="openshift-marketplace/certified-operators-zl2mk"
Nov 24 11:22:04 crc kubenswrapper[5072]: I1124 11:22:04.832008 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqbxq\" (UniqueName: \"kubernetes.io/projected/d5dac824-1711-46d7-8bd3-55975eb05d63-kube-api-access-rqbxq\") pod \"certified-operators-zl2mk\" (UID: \"d5dac824-1711-46d7-8bd3-55975eb05d63\") " pod="openshift-marketplace/certified-operators-zl2mk"
Nov 24 11:22:04 crc kubenswrapper[5072]: I1124 11:22:04.914658 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zl2mk"
Nov 24 11:22:05 crc kubenswrapper[5072]: I1124 11:22:05.041532 5072 generic.go:334] "Generic (PLEG): container finished" podID="b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88" containerID="7cf7efede9ac50f4bfe229962a75bb2536dc37a7a856beb85ad397e1ea899228" exitCode=0
Nov 24 11:22:05 crc kubenswrapper[5072]: I1124 11:22:05.041599 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2nhqx" event={"ID":"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88","Type":"ContainerDied","Data":"7cf7efede9ac50f4bfe229962a75bb2536dc37a7a856beb85ad397e1ea899228"}
Nov 24 11:22:05 crc kubenswrapper[5072]: I1124 11:22:05.359228 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zl2mk"]
Nov 24 11:22:05 crc kubenswrapper[5072]: W1124 11:22:05.366328 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5dac824_1711_46d7_8bd3_55975eb05d63.slice/crio-fe5156e627a4a7cd4e91930732261a9c74c948ba3f31920cb7276b038d62aac0 WatchSource:0}: Error finding container fe5156e627a4a7cd4e91930732261a9c74c948ba3f31920cb7276b038d62aac0: Status 404 returned error can't find the container with id fe5156e627a4a7cd4e91930732261a9c74c948ba3f31920cb7276b038d62aac0
Nov 24 11:22:06 crc kubenswrapper[5072]: I1124 11:22:06.069260 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2nhqx" event={"ID":"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88","Type":"ContainerStarted","Data":"42b0079860b440b06ef7367ae129b667a5a54f5ee4561da5a7f9624724299fb1"}
Nov 24 11:22:06 crc kubenswrapper[5072]: I1124 11:22:06.069597 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2nhqx" event={"ID":"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88","Type":"ContainerStarted","Data":"745ecad5dfe72bb54d0462b4f066f60e1742e7451ee7d911fbb44b2602f2e478"}
Nov 24 11:22:06 crc kubenswrapper[5072]: I1124 11:22:06.069611 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2nhqx" event={"ID":"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88","Type":"ContainerStarted","Data":"4ed95db550e94fbd65abb87d7dc5642f824f12b5ce83fb4d1c80768b26469dfe"}
Nov 24 11:22:06 crc kubenswrapper[5072]: I1124 11:22:06.069622 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2nhqx" event={"ID":"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88","Type":"ContainerStarted","Data":"ac82b81c8cba28de9a5da7312001becc6c547f5de3a12daff41dc025d8ee6a37"}
Nov 24 11:22:06 crc kubenswrapper[5072]: I1124 11:22:06.069634 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2nhqx" event={"ID":"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88","Type":"ContainerStarted","Data":"c4e0c014838c8a31eefb14f49ee813bacdc166df2bacdd2f895b3ce027e96acf"}
Nov 24 11:22:06 crc kubenswrapper[5072]: I1124 11:22:06.072351 5072 generic.go:334] "Generic (PLEG): container finished" podID="d5dac824-1711-46d7-8bd3-55975eb05d63" containerID="11d5cf1abc95e6fb08ff5873145e07a3a961d66ef221857c0516bafee65e9064" exitCode=0
Nov 24 11:22:06 crc kubenswrapper[5072]: I1124 11:22:06.072419 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl2mk" event={"ID":"d5dac824-1711-46d7-8bd3-55975eb05d63","Type":"ContainerDied","Data":"11d5cf1abc95e6fb08ff5873145e07a3a961d66ef221857c0516bafee65e9064"}
Nov 24 11:22:06 crc kubenswrapper[5072]: I1124 11:22:06.072445 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl2mk" event={"ID":"d5dac824-1711-46d7-8bd3-55975eb05d63","Type":"ContainerStarted","Data":"fe5156e627a4a7cd4e91930732261a9c74c948ba3f31920cb7276b038d62aac0"}
Nov 24 11:22:07 crc kubenswrapper[5072]: I1124 11:22:07.091243 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2nhqx" event={"ID":"b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88","Type":"ContainerStarted","Data":"5dd8479f4cc5b9ca47c504be1a888942bc1985c95a88f00d988421c8b4cf8a7f"}
Nov 24 11:22:07 crc kubenswrapper[5072]: I1124 11:22:07.092603 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-2nhqx"
Nov 24 11:22:07 crc kubenswrapper[5072]: I1124 11:22:07.094351 5072 generic.go:334] "Generic (PLEG): container finished" podID="d5dac824-1711-46d7-8bd3-55975eb05d63" containerID="d1d7b1f2daf8f2ac7b5a5e4eaa4da932b2f6a76b1cbf129b4a123e053e45c4be" exitCode=0
Nov 24 11:22:07 crc kubenswrapper[5072]: I1124 11:22:07.094416 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl2mk" event={"ID":"d5dac824-1711-46d7-8bd3-55975eb05d63","Type":"ContainerDied","Data":"d1d7b1f2daf8f2ac7b5a5e4eaa4da932b2f6a76b1cbf129b4a123e053e45c4be"}
Nov 24 11:22:07 crc kubenswrapper[5072]: I1124 11:22:07.128059 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-2nhqx" podStartSLOduration=5.258911933 podStartE2EDuration="12.12804576s" podCreationTimestamp="2025-11-24 11:21:55 +0000 UTC" firstStartedPulling="2025-11-24 11:21:55.775078073 +0000 UTC m=+767.486602539" lastFinishedPulling="2025-11-24 11:22:02.64421189 +0000 UTC m=+774.355736366" observedRunningTime="2025-11-24 11:22:07.123602238 +0000 UTC m=+778.835126714" watchObservedRunningTime="2025-11-24 11:22:07.12804576 +0000 UTC m=+778.839570236"
Nov 24 11:22:07 crc kubenswrapper[5072]: I1124 11:22:07.188922 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-xc9ht"
Nov 24 11:22:08 crc kubenswrapper[5072]: I1124 11:22:08.105636 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl2mk" event={"ID":"d5dac824-1711-46d7-8bd3-55975eb05d63","Type":"ContainerStarted","Data":"8218bd2159e28b8d9afb452e0108a981217680527cb3bb19d73ae9f76b95fdf7"}
Nov 24 11:22:08 crc kubenswrapper[5072]: I1124 11:22:08.124429 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zl2mk" podStartSLOduration=2.701246715 podStartE2EDuration="4.124407678s" podCreationTimestamp="2025-11-24 11:22:04 +0000 UTC" firstStartedPulling="2025-11-24 11:22:06.075180849 +0000 UTC m=+777.786705325" lastFinishedPulling="2025-11-24 11:22:07.498341782 +0000 UTC m=+779.209866288" observedRunningTime="2025-11-24 11:22:08.120397107 +0000 UTC m=+779.831921593" watchObservedRunningTime="2025-11-24 11:22:08.124407678 +0000 UTC m=+779.835932164"
Nov 24 11:22:09 crc kubenswrapper[5072]: I1124 11:22:09.196352 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d4mnf"
status="started" pod="openshift-marketplace/redhat-operators-d4mnf" Nov 24 11:22:09 crc kubenswrapper[5072]: I1124 11:22:09.245778 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d4mnf" Nov 24 11:22:10 crc kubenswrapper[5072]: I1124 11:22:10.616878 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-2nhqx" Nov 24 11:22:10 crc kubenswrapper[5072]: I1124 11:22:10.656547 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-2nhqx" Nov 24 11:22:11 crc kubenswrapper[5072]: I1124 11:22:11.998736 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d6cz9"] Nov 24 11:22:12 crc kubenswrapper[5072]: I1124 11:22:12.001107 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6cz9" Nov 24 11:22:12 crc kubenswrapper[5072]: I1124 11:22:12.015552 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6cz9"] Nov 24 11:22:12 crc kubenswrapper[5072]: I1124 11:22:12.101552 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ee3a051-67a5-47c8-b663-8bff4a952094-catalog-content\") pod \"redhat-marketplace-d6cz9\" (UID: \"6ee3a051-67a5-47c8-b663-8bff4a952094\") " pod="openshift-marketplace/redhat-marketplace-d6cz9" Nov 24 11:22:12 crc kubenswrapper[5072]: I1124 11:22:12.102013 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq58b\" (UniqueName: \"kubernetes.io/projected/6ee3a051-67a5-47c8-b663-8bff4a952094-kube-api-access-lq58b\") pod \"redhat-marketplace-d6cz9\" (UID: \"6ee3a051-67a5-47c8-b663-8bff4a952094\") " pod="openshift-marketplace/redhat-marketplace-d6cz9" Nov 24 11:22:12 crc kubenswrapper[5072]: I1124 11:22:12.102292 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ee3a051-67a5-47c8-b663-8bff4a952094-utilities\") pod \"redhat-marketplace-d6cz9\" (UID: \"6ee3a051-67a5-47c8-b663-8bff4a952094\") " pod="openshift-marketplace/redhat-marketplace-d6cz9" Nov 24 11:22:12 crc kubenswrapper[5072]: I1124 11:22:12.203901 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq58b\" (UniqueName: \"kubernetes.io/projected/6ee3a051-67a5-47c8-b663-8bff4a952094-kube-api-access-lq58b\") pod \"redhat-marketplace-d6cz9\" (UID: \"6ee3a051-67a5-47c8-b663-8bff4a952094\") " pod="openshift-marketplace/redhat-marketplace-d6cz9" Nov 24 11:22:12 crc kubenswrapper[5072]: I1124 11:22:12.204249 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ee3a051-67a5-47c8-b663-8bff4a952094-utilities\") pod \"redhat-marketplace-d6cz9\" (UID: \"6ee3a051-67a5-47c8-b663-8bff4a952094\") " pod="openshift-marketplace/redhat-marketplace-d6cz9" Nov 24 11:22:12 crc kubenswrapper[5072]: I1124 11:22:12.204513 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ee3a051-67a5-47c8-b663-8bff4a952094-catalog-content\") pod \"redhat-marketplace-d6cz9\" (UID: \"6ee3a051-67a5-47c8-b663-8bff4a952094\") " 
pod="openshift-marketplace/redhat-marketplace-d6cz9" Nov 24 11:22:12 crc kubenswrapper[5072]: I1124 11:22:12.204793 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ee3a051-67a5-47c8-b663-8bff4a952094-utilities\") pod \"redhat-marketplace-d6cz9\" (UID: \"6ee3a051-67a5-47c8-b663-8bff4a952094\") " pod="openshift-marketplace/redhat-marketplace-d6cz9" Nov 24 11:22:12 crc kubenswrapper[5072]: I1124 11:22:12.205143 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ee3a051-67a5-47c8-b663-8bff4a952094-catalog-content\") pod \"redhat-marketplace-d6cz9\" (UID: \"6ee3a051-67a5-47c8-b663-8bff4a952094\") " pod="openshift-marketplace/redhat-marketplace-d6cz9" Nov 24 11:22:12 crc kubenswrapper[5072]: I1124 11:22:12.238576 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq58b\" (UniqueName: \"kubernetes.io/projected/6ee3a051-67a5-47c8-b663-8bff4a952094-kube-api-access-lq58b\") pod \"redhat-marketplace-d6cz9\" (UID: \"6ee3a051-67a5-47c8-b663-8bff4a952094\") " pod="openshift-marketplace/redhat-marketplace-d6cz9" Nov 24 11:22:12 crc kubenswrapper[5072]: I1124 11:22:12.337326 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6cz9" Nov 24 11:22:12 crc kubenswrapper[5072]: I1124 11:22:12.716813 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6cz9"] Nov 24 11:22:13 crc kubenswrapper[5072]: I1124 11:22:13.149446 5072 generic.go:334] "Generic (PLEG): container finished" podID="6ee3a051-67a5-47c8-b663-8bff4a952094" containerID="27e9331c6a940e0e88049d50346fb496877998e79f8192e234d6b1115e4c5d52" exitCode=0 Nov 24 11:22:13 crc kubenswrapper[5072]: I1124 11:22:13.149500 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6cz9" event={"ID":"6ee3a051-67a5-47c8-b663-8bff4a952094","Type":"ContainerDied","Data":"27e9331c6a940e0e88049d50346fb496877998e79f8192e234d6b1115e4c5d52"} Nov 24 11:22:13 crc kubenswrapper[5072]: I1124 11:22:13.149557 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6cz9" event={"ID":"6ee3a051-67a5-47c8-b663-8bff4a952094","Type":"ContainerStarted","Data":"1a645e7dc4f22e752fd6ac8992a1eeda46db7db02ed270d8bad91b8764ca59af"} Nov 24 11:22:14 crc kubenswrapper[5072]: I1124 11:22:14.789319 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d4mnf"] Nov 24 11:22:14 crc kubenswrapper[5072]: I1124 11:22:14.790030 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d4mnf" podUID="edcb1a80-ffc4-4a75-9f38-07491b5c4c68" containerName="registry-server" containerID="cri-o://90e04ddde7d4725a1b97f093f372d3103d1cec24f841609fb8ec40a111a6c846" gracePeriod=2 Nov 24 11:22:14 crc kubenswrapper[5072]: I1124 11:22:14.915772 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zl2mk" Nov 24 11:22:14 crc kubenswrapper[5072]: I1124 11:22:14.915856 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zl2mk" Nov 24 11:22:14 crc kubenswrapper[5072]: I1124 11:22:14.966729 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-zl2mk" Nov 24 11:22:15 crc kubenswrapper[5072]: I1124 11:22:15.167359 5072 generic.go:334] "Generic (PLEG): container finished" podID="edcb1a80-ffc4-4a75-9f38-07491b5c4c68" containerID="90e04ddde7d4725a1b97f093f372d3103d1cec24f841609fb8ec40a111a6c846" exitCode=0 Nov 24 11:22:15 crc kubenswrapper[5072]: I1124 11:22:15.167466 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4mnf" event={"ID":"edcb1a80-ffc4-4a75-9f38-07491b5c4c68","Type":"ContainerDied","Data":"90e04ddde7d4725a1b97f093f372d3103d1cec24f841609fb8ec40a111a6c846"} Nov 24 11:22:15 crc kubenswrapper[5072]: I1124 11:22:15.169273 5072 generic.go:334] "Generic (PLEG): container finished" podID="6ee3a051-67a5-47c8-b663-8bff4a952094" containerID="4394dfff5b15b37134329ced29a2f05c2716e37600e367787a21895d0117a6be" exitCode=0 Nov 24 11:22:15 crc kubenswrapper[5072]: I1124 11:22:15.170289 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6cz9" event={"ID":"6ee3a051-67a5-47c8-b663-8bff4a952094","Type":"ContainerDied","Data":"4394dfff5b15b37134329ced29a2f05c2716e37600e367787a21895d0117a6be"} Nov 24 11:22:15 crc kubenswrapper[5072]: I1124 11:22:15.224186 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zl2mk" Nov 24 11:22:15 crc kubenswrapper[5072]: I1124 11:22:15.284205 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d4mnf" Nov 24 11:22:15 crc kubenswrapper[5072]: I1124 11:22:15.404947 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edcb1a80-ffc4-4a75-9f38-07491b5c4c68-catalog-content\") pod \"edcb1a80-ffc4-4a75-9f38-07491b5c4c68\" (UID: \"edcb1a80-ffc4-4a75-9f38-07491b5c4c68\") " Nov 24 11:22:15 crc kubenswrapper[5072]: I1124 11:22:15.405005 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t45x2\" (UniqueName: \"kubernetes.io/projected/edcb1a80-ffc4-4a75-9f38-07491b5c4c68-kube-api-access-t45x2\") pod \"edcb1a80-ffc4-4a75-9f38-07491b5c4c68\" (UID: \"edcb1a80-ffc4-4a75-9f38-07491b5c4c68\") " Nov 24 11:22:15 crc kubenswrapper[5072]: I1124 11:22:15.405044 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edcb1a80-ffc4-4a75-9f38-07491b5c4c68-utilities\") pod \"edcb1a80-ffc4-4a75-9f38-07491b5c4c68\" (UID: \"edcb1a80-ffc4-4a75-9f38-07491b5c4c68\") " Nov 24 11:22:15 crc kubenswrapper[5072]: I1124 11:22:15.406594 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edcb1a80-ffc4-4a75-9f38-07491b5c4c68-utilities" (OuterVolumeSpecName: "utilities") pod "edcb1a80-ffc4-4a75-9f38-07491b5c4c68" (UID: "edcb1a80-ffc4-4a75-9f38-07491b5c4c68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:22:15 crc kubenswrapper[5072]: I1124 11:22:15.413017 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edcb1a80-ffc4-4a75-9f38-07491b5c4c68-kube-api-access-t45x2" (OuterVolumeSpecName: "kube-api-access-t45x2") pod "edcb1a80-ffc4-4a75-9f38-07491b5c4c68" (UID: "edcb1a80-ffc4-4a75-9f38-07491b5c4c68"). InnerVolumeSpecName "kube-api-access-t45x2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:22:15 crc kubenswrapper[5072]: I1124 11:22:15.506035 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t45x2\" (UniqueName: \"kubernetes.io/projected/edcb1a80-ffc4-4a75-9f38-07491b5c4c68-kube-api-access-t45x2\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:15 crc kubenswrapper[5072]: I1124 11:22:15.506063 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edcb1a80-ffc4-4a75-9f38-07491b5c4c68-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:15 crc kubenswrapper[5072]: I1124 11:22:15.529483 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edcb1a80-ffc4-4a75-9f38-07491b5c4c68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "edcb1a80-ffc4-4a75-9f38-07491b5c4c68" (UID: "edcb1a80-ffc4-4a75-9f38-07491b5c4c68"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:22:15 crc kubenswrapper[5072]: I1124 11:22:15.607928 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edcb1a80-ffc4-4a75-9f38-07491b5c4c68-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:15 crc kubenswrapper[5072]: I1124 11:22:15.611958 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-mjmzs" Nov 24 11:22:15 crc kubenswrapper[5072]: I1124 11:22:15.619480 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-2nhqx" Nov 24 11:22:15 crc kubenswrapper[5072]: I1124 11:22:15.715207 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-54sxn" Nov 24 11:22:16 crc kubenswrapper[5072]: I1124 11:22:15.998598 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-fj9hm"] Nov 24 11:22:16 crc kubenswrapper[5072]: E1124 11:22:15.999906 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edcb1a80-ffc4-4a75-9f38-07491b5c4c68" containerName="extract-utilities" Nov 24 11:22:16 crc kubenswrapper[5072]: I1124 11:22:15.999935 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="edcb1a80-ffc4-4a75-9f38-07491b5c4c68" containerName="extract-utilities" Nov 24 11:22:16 crc kubenswrapper[5072]: E1124 11:22:15.999989 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edcb1a80-ffc4-4a75-9f38-07491b5c4c68" containerName="registry-server" Nov 24 11:22:16 crc kubenswrapper[5072]: I1124 11:22:16.000004 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="edcb1a80-ffc4-4a75-9f38-07491b5c4c68" containerName="registry-server" Nov 24 11:22:16 crc kubenswrapper[5072]: E1124 11:22:16.000020 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edcb1a80-ffc4-4a75-9f38-07491b5c4c68" containerName="extract-content" Nov 24 11:22:16 crc kubenswrapper[5072]: I1124 11:22:16.000035 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="edcb1a80-ffc4-4a75-9f38-07491b5c4c68" containerName="extract-content" Nov 24 11:22:16 crc kubenswrapper[5072]: I1124 11:22:16.000516 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="edcb1a80-ffc4-4a75-9f38-07491b5c4c68" containerName="registry-server" Nov 24 11:22:16 crc kubenswrapper[5072]: I1124 11:22:16.001552 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-fj9hm" Nov 24 11:22:16 crc kubenswrapper[5072]: I1124 11:22:16.012537 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-2gjsl" Nov 24 11:22:16 crc kubenswrapper[5072]: I1124 11:22:16.012671 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 24 11:22:16 crc kubenswrapper[5072]: I1124 11:22:16.012826 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 24 11:22:16 crc kubenswrapper[5072]: I1124 11:22:16.035902 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fj9hm"] Nov 24 11:22:16 crc kubenswrapper[5072]: I1124 11:22:16.119017 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nwxx\" (UniqueName: \"kubernetes.io/projected/647cb5b8-46fc-4c8d-90af-18ef37a34807-kube-api-access-2nwxx\") pod \"openstack-operator-index-fj9hm\" (UID: \"647cb5b8-46fc-4c8d-90af-18ef37a34807\") " pod="openstack-operators/openstack-operator-index-fj9hm" Nov 24 11:22:16 crc kubenswrapper[5072]: I1124 11:22:16.180471 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6cz9" event={"ID":"6ee3a051-67a5-47c8-b663-8bff4a952094","Type":"ContainerStarted","Data":"c3b11d410fcc4741adc8658ed6227be453415ff78f21c19998242cb8d87e0e85"} Nov 24 11:22:16 crc kubenswrapper[5072]: I1124 11:22:16.184420 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4mnf" event={"ID":"edcb1a80-ffc4-4a75-9f38-07491b5c4c68","Type":"ContainerDied","Data":"009b00a77b894538ef5b560001a6d2d6937ad0b16326d5a8ec8515793b36d596"} Nov 24 11:22:16 crc kubenswrapper[5072]: I1124 11:22:16.184443 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d4mnf" Nov 24 11:22:16 crc kubenswrapper[5072]: I1124 11:22:16.184516 5072 scope.go:117] "RemoveContainer" containerID="90e04ddde7d4725a1b97f093f372d3103d1cec24f841609fb8ec40a111a6c846" Nov 24 11:22:16 crc kubenswrapper[5072]: I1124 11:22:16.203231 5072 scope.go:117] "RemoveContainer" containerID="e6c4a4de2e1005447a2d4d496b5c45935e263a15524b42ed3f2c3830acf91254" Nov 24 11:22:16 crc kubenswrapper[5072]: I1124 11:22:16.220744 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d6cz9" podStartSLOduration=2.780183569 podStartE2EDuration="5.220722479s" podCreationTimestamp="2025-11-24 11:22:11 +0000 UTC" firstStartedPulling="2025-11-24 11:22:13.151333563 +0000 UTC m=+784.862858039" lastFinishedPulling="2025-11-24 11:22:15.591872463 +0000 UTC m=+787.303396949" observedRunningTime="2025-11-24 11:22:16.207673271 +0000 UTC m=+787.919197747" watchObservedRunningTime="2025-11-24 11:22:16.220722479 +0000 UTC m=+787.932246955" Nov 24 11:22:16 crc kubenswrapper[5072]: I1124 11:22:16.221467 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nwxx\" (UniqueName: \"kubernetes.io/projected/647cb5b8-46fc-4c8d-90af-18ef37a34807-kube-api-access-2nwxx\") pod \"openstack-operator-index-fj9hm\" (UID: \"647cb5b8-46fc-4c8d-90af-18ef37a34807\") " pod="openstack-operators/openstack-operator-index-fj9hm" Nov 24 11:22:16 crc kubenswrapper[5072]: I1124 11:22:16.225437 5072 scope.go:117] "RemoveContainer" containerID="235a2666d0468fac05c353f4d573cb345c6acf54cdc345493bd4d3bc4140e6be" Nov 24 11:22:16 crc kubenswrapper[5072]: I1124 11:22:16.225559 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d4mnf"] Nov 24 11:22:16 crc kubenswrapper[5072]: I1124 11:22:16.229305 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d4mnf"] Nov 24 11:22:16 crc kubenswrapper[5072]: I1124 11:22:16.248267 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nwxx\" (UniqueName: \"kubernetes.io/projected/647cb5b8-46fc-4c8d-90af-18ef37a34807-kube-api-access-2nwxx\") pod \"openstack-operator-index-fj9hm\" (UID: \"647cb5b8-46fc-4c8d-90af-18ef37a34807\") " pod="openstack-operators/openstack-operator-index-fj9hm" Nov 24 11:22:16 crc kubenswrapper[5072]: I1124 11:22:16.336352 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-fj9hm" Nov 24 11:22:16 crc kubenswrapper[5072]: I1124 11:22:16.781350 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fj9hm"] Nov 24 11:22:16 crc kubenswrapper[5072]: W1124 11:22:16.788889 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod647cb5b8_46fc_4c8d_90af_18ef37a34807.slice/crio-1f9280c02b02b2fb15ac1a5880423ad71145c3f263769ba2486d795e728e4ed4 WatchSource:0}: Error finding container 1f9280c02b02b2fb15ac1a5880423ad71145c3f263769ba2486d795e728e4ed4: Status 404 returned error can't find the container with id 1f9280c02b02b2fb15ac1a5880423ad71145c3f263769ba2486d795e728e4ed4 Nov 24 11:22:17 crc kubenswrapper[5072]: I1124 11:22:17.027527 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edcb1a80-ffc4-4a75-9f38-07491b5c4c68" path="/var/lib/kubelet/pods/edcb1a80-ffc4-4a75-9f38-07491b5c4c68/volumes" Nov 24 11:22:17 crc kubenswrapper[5072]: I1124 11:22:17.191825 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fj9hm" event={"ID":"647cb5b8-46fc-4c8d-90af-18ef37a34807","Type":"ContainerStarted","Data":"1f9280c02b02b2fb15ac1a5880423ad71145c3f263769ba2486d795e728e4ed4"} Nov 24 11:22:20 crc kubenswrapper[5072]: I1124 11:22:20.214506 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fj9hm" event={"ID":"647cb5b8-46fc-4c8d-90af-18ef37a34807","Type":"ContainerStarted","Data":"e407cc6ca63c9fd43d3174c304fc385cf07d515f003c96ee64b36dab3b0d99cb"} Nov 24 11:22:20 crc kubenswrapper[5072]: I1124 11:22:20.239668 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-fj9hm" podStartSLOduration=2.211427445 podStartE2EDuration="5.239639044s" podCreationTimestamp="2025-11-24 11:22:15 +0000 UTC" firstStartedPulling="2025-11-24 11:22:16.790429136 +0000 UTC m=+788.501953612" lastFinishedPulling="2025-11-24 11:22:19.818640725 +0000 UTC m=+791.530165211" observedRunningTime="2025-11-24 11:22:20.237207113 +0000 UTC m=+791.948731649" watchObservedRunningTime="2025-11-24 11:22:20.239639044 +0000 UTC m=+791.951163550" Nov 24 11:22:22 crc kubenswrapper[5072]: I1124 11:22:22.183816 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zl2mk"] Nov 24 11:22:22 crc kubenswrapper[5072]: I1124 11:22:22.184092 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zl2mk" podUID="d5dac824-1711-46d7-8bd3-55975eb05d63" containerName="registry-server" containerID="cri-o://8218bd2159e28b8d9afb452e0108a981217680527cb3bb19d73ae9f76b95fdf7" gracePeriod=2 Nov 24 11:22:22 crc kubenswrapper[5072]: I1124 11:22:22.340424 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d6cz9" Nov 24 11:22:22 crc kubenswrapper[5072]: I1124 11:22:22.340770 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d6cz9" Nov 24 11:22:22 crc kubenswrapper[5072]: I1124 11:22:22.393039 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d6cz9" Nov 24 11:22:22 crc kubenswrapper[5072]: I1124 11:22:22.655900 5072 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zl2mk" Nov 24 11:22:22 crc kubenswrapper[5072]: I1124 11:22:22.823035 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5dac824-1711-46d7-8bd3-55975eb05d63-utilities\") pod \"d5dac824-1711-46d7-8bd3-55975eb05d63\" (UID: \"d5dac824-1711-46d7-8bd3-55975eb05d63\") " Nov 24 11:22:22 crc kubenswrapper[5072]: I1124 11:22:22.823144 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqbxq\" (UniqueName: \"kubernetes.io/projected/d5dac824-1711-46d7-8bd3-55975eb05d63-kube-api-access-rqbxq\") pod \"d5dac824-1711-46d7-8bd3-55975eb05d63\" (UID: \"d5dac824-1711-46d7-8bd3-55975eb05d63\") " Nov 24 11:22:22 crc kubenswrapper[5072]: I1124 11:22:22.823241 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5dac824-1711-46d7-8bd3-55975eb05d63-catalog-content\") pod \"d5dac824-1711-46d7-8bd3-55975eb05d63\" (UID: \"d5dac824-1711-46d7-8bd3-55975eb05d63\") " Nov 24 11:22:22 crc kubenswrapper[5072]: I1124 11:22:22.823771 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5dac824-1711-46d7-8bd3-55975eb05d63-utilities" (OuterVolumeSpecName: "utilities") pod "d5dac824-1711-46d7-8bd3-55975eb05d63" (UID: "d5dac824-1711-46d7-8bd3-55975eb05d63"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:22:22 crc kubenswrapper[5072]: I1124 11:22:22.836106 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5dac824-1711-46d7-8bd3-55975eb05d63-kube-api-access-rqbxq" (OuterVolumeSpecName: "kube-api-access-rqbxq") pod "d5dac824-1711-46d7-8bd3-55975eb05d63" (UID: "d5dac824-1711-46d7-8bd3-55975eb05d63"). InnerVolumeSpecName "kube-api-access-rqbxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:22:22 crc kubenswrapper[5072]: I1124 11:22:22.882601 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5dac824-1711-46d7-8bd3-55975eb05d63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5dac824-1711-46d7-8bd3-55975eb05d63" (UID: "d5dac824-1711-46d7-8bd3-55975eb05d63"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:22:22 crc kubenswrapper[5072]: I1124 11:22:22.924791 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5dac824-1711-46d7-8bd3-55975eb05d63-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:22 crc kubenswrapper[5072]: I1124 11:22:22.924852 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqbxq\" (UniqueName: \"kubernetes.io/projected/d5dac824-1711-46d7-8bd3-55975eb05d63-kube-api-access-rqbxq\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:22 crc kubenswrapper[5072]: I1124 11:22:22.924873 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5dac824-1711-46d7-8bd3-55975eb05d63-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:23 crc kubenswrapper[5072]: I1124 11:22:23.237991 5072 generic.go:334] "Generic (PLEG): container finished" podID="d5dac824-1711-46d7-8bd3-55975eb05d63" containerID="8218bd2159e28b8d9afb452e0108a981217680527cb3bb19d73ae9f76b95fdf7" exitCode=0 Nov 24 11:22:23 crc kubenswrapper[5072]: I1124 11:22:23.238053 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zl2mk" Nov 24 11:22:23 crc kubenswrapper[5072]: I1124 11:22:23.238148 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl2mk" event={"ID":"d5dac824-1711-46d7-8bd3-55975eb05d63","Type":"ContainerDied","Data":"8218bd2159e28b8d9afb452e0108a981217680527cb3bb19d73ae9f76b95fdf7"} Nov 24 11:22:23 crc kubenswrapper[5072]: I1124 11:22:23.238195 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl2mk" event={"ID":"d5dac824-1711-46d7-8bd3-55975eb05d63","Type":"ContainerDied","Data":"fe5156e627a4a7cd4e91930732261a9c74c948ba3f31920cb7276b038d62aac0"} Nov 24 11:22:23 crc kubenswrapper[5072]: I1124 11:22:23.238226 5072 scope.go:117] "RemoveContainer" containerID="8218bd2159e28b8d9afb452e0108a981217680527cb3bb19d73ae9f76b95fdf7" Nov 24 11:22:23 crc kubenswrapper[5072]: I1124 11:22:23.259197 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zl2mk"] Nov 24 11:22:23 crc kubenswrapper[5072]: I1124 11:22:23.261261 5072 scope.go:117] "RemoveContainer" containerID="d1d7b1f2daf8f2ac7b5a5e4eaa4da932b2f6a76b1cbf129b4a123e053e45c4be" Nov 24 11:22:23 crc kubenswrapper[5072]: I1124 11:22:23.271779 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zl2mk"] Nov 24 11:22:23 crc kubenswrapper[5072]: I1124 11:22:23.278405 5072 scope.go:117] "RemoveContainer" containerID="11d5cf1abc95e6fb08ff5873145e07a3a961d66ef221857c0516bafee65e9064" Nov 24 11:22:23 crc kubenswrapper[5072]: I1124 11:22:23.298787 5072 scope.go:117] "RemoveContainer" containerID="8218bd2159e28b8d9afb452e0108a981217680527cb3bb19d73ae9f76b95fdf7" Nov 24 11:22:23 crc kubenswrapper[5072]: E1124 11:22:23.300319 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8218bd2159e28b8d9afb452e0108a981217680527cb3bb19d73ae9f76b95fdf7\": container with ID starting with 8218bd2159e28b8d9afb452e0108a981217680527cb3bb19d73ae9f76b95fdf7 not found: ID does not exist" containerID="8218bd2159e28b8d9afb452e0108a981217680527cb3bb19d73ae9f76b95fdf7" Nov 24 11:22:23 crc kubenswrapper[5072]: I1124 11:22:23.300407 
5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8218bd2159e28b8d9afb452e0108a981217680527cb3bb19d73ae9f76b95fdf7"} err="failed to get container status \"8218bd2159e28b8d9afb452e0108a981217680527cb3bb19d73ae9f76b95fdf7\": rpc error: code = NotFound desc = could not find container \"8218bd2159e28b8d9afb452e0108a981217680527cb3bb19d73ae9f76b95fdf7\": container with ID starting with 8218bd2159e28b8d9afb452e0108a981217680527cb3bb19d73ae9f76b95fdf7 not found: ID does not exist" Nov 24 11:22:23 crc kubenswrapper[5072]: I1124 11:22:23.300445 5072 scope.go:117] "RemoveContainer" containerID="d1d7b1f2daf8f2ac7b5a5e4eaa4da932b2f6a76b1cbf129b4a123e053e45c4be" Nov 24 11:22:23 crc kubenswrapper[5072]: E1124 11:22:23.300900 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1d7b1f2daf8f2ac7b5a5e4eaa4da932b2f6a76b1cbf129b4a123e053e45c4be\": container with ID starting with d1d7b1f2daf8f2ac7b5a5e4eaa4da932b2f6a76b1cbf129b4a123e053e45c4be not found: ID does not exist" containerID="d1d7b1f2daf8f2ac7b5a5e4eaa4da932b2f6a76b1cbf129b4a123e053e45c4be" Nov 24 11:22:23 crc kubenswrapper[5072]: I1124 11:22:23.300938 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1d7b1f2daf8f2ac7b5a5e4eaa4da932b2f6a76b1cbf129b4a123e053e45c4be"} err="failed to get container status \"d1d7b1f2daf8f2ac7b5a5e4eaa4da932b2f6a76b1cbf129b4a123e053e45c4be\": rpc error: code = NotFound desc = could not find container \"d1d7b1f2daf8f2ac7b5a5e4eaa4da932b2f6a76b1cbf129b4a123e053e45c4be\": container with ID starting with d1d7b1f2daf8f2ac7b5a5e4eaa4da932b2f6a76b1cbf129b4a123e053e45c4be not found: ID does not exist" Nov 24 11:22:23 crc kubenswrapper[5072]: I1124 11:22:23.300986 5072 scope.go:117] "RemoveContainer" containerID="11d5cf1abc95e6fb08ff5873145e07a3a961d66ef221857c0516bafee65e9064" Nov 24 11:22:23 crc kubenswrapper[5072]: E1124 11:22:23.302562 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11d5cf1abc95e6fb08ff5873145e07a3a961d66ef221857c0516bafee65e9064\": container with ID starting with 11d5cf1abc95e6fb08ff5873145e07a3a961d66ef221857c0516bafee65e9064 not found: ID does not exist" containerID="11d5cf1abc95e6fb08ff5873145e07a3a961d66ef221857c0516bafee65e9064" Nov 24 11:22:23 crc kubenswrapper[5072]: I1124 11:22:23.302637 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11d5cf1abc95e6fb08ff5873145e07a3a961d66ef221857c0516bafee65e9064"} err="failed to get container status \"11d5cf1abc95e6fb08ff5873145e07a3a961d66ef221857c0516bafee65e9064\": rpc error: code = NotFound desc = could not find container \"11d5cf1abc95e6fb08ff5873145e07a3a961d66ef221857c0516bafee65e9064\": container with ID starting with 11d5cf1abc95e6fb08ff5873145e07a3a961d66ef221857c0516bafee65e9064 not found: ID does not exist" Nov 24 11:22:23 crc kubenswrapper[5072]: I1124 11:22:23.304914 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d6cz9" Nov 24 11:22:25 crc kubenswrapper[5072]: I1124 11:22:25.030342 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5dac824-1711-46d7-8bd3-55975eb05d63" path="/var/lib/kubelet/pods/d5dac824-1711-46d7-8bd3-55975eb05d63/volumes" Nov 24 11:22:25 crc kubenswrapper[5072]: I1124 11:22:25.183879 5072 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-d6cz9"] Nov 24 11:22:25 crc kubenswrapper[5072]: I1124 11:22:25.250761 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d6cz9" podUID="6ee3a051-67a5-47c8-b663-8bff4a952094" containerName="registry-server" containerID="cri-o://c3b11d410fcc4741adc8658ed6227be453415ff78f21c19998242cb8d87e0e85" gracePeriod=2 Nov 24 11:22:25 crc kubenswrapper[5072]: I1124 11:22:25.769970 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6cz9" Nov 24 11:22:25 crc kubenswrapper[5072]: I1124 11:22:25.869012 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq58b\" (UniqueName: \"kubernetes.io/projected/6ee3a051-67a5-47c8-b663-8bff4a952094-kube-api-access-lq58b\") pod \"6ee3a051-67a5-47c8-b663-8bff4a952094\" (UID: \"6ee3a051-67a5-47c8-b663-8bff4a952094\") " Nov 24 11:22:25 crc kubenswrapper[5072]: I1124 11:22:25.869059 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ee3a051-67a5-47c8-b663-8bff4a952094-catalog-content\") pod \"6ee3a051-67a5-47c8-b663-8bff4a952094\" (UID: \"6ee3a051-67a5-47c8-b663-8bff4a952094\") " Nov 24 11:22:25 crc kubenswrapper[5072]: I1124 11:22:25.874498 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee3a051-67a5-47c8-b663-8bff4a952094-kube-api-access-lq58b" (OuterVolumeSpecName: "kube-api-access-lq58b") pod "6ee3a051-67a5-47c8-b663-8bff4a952094" (UID: "6ee3a051-67a5-47c8-b663-8bff4a952094"). InnerVolumeSpecName "kube-api-access-lq58b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:22:25 crc kubenswrapper[5072]: I1124 11:22:25.893909 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ee3a051-67a5-47c8-b663-8bff4a952094-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ee3a051-67a5-47c8-b663-8bff4a952094" (UID: "6ee3a051-67a5-47c8-b663-8bff4a952094"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:22:25 crc kubenswrapper[5072]: I1124 11:22:25.970075 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ee3a051-67a5-47c8-b663-8bff4a952094-utilities\") pod \"6ee3a051-67a5-47c8-b663-8bff4a952094\" (UID: \"6ee3a051-67a5-47c8-b663-8bff4a952094\") " Nov 24 11:22:25 crc kubenswrapper[5072]: I1124 11:22:25.970505 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lq58b\" (UniqueName: \"kubernetes.io/projected/6ee3a051-67a5-47c8-b663-8bff4a952094-kube-api-access-lq58b\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:25 crc kubenswrapper[5072]: I1124 11:22:25.970548 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ee3a051-67a5-47c8-b663-8bff4a952094-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:25 crc kubenswrapper[5072]: I1124 11:22:25.970965 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ee3a051-67a5-47c8-b663-8bff4a952094-utilities" (OuterVolumeSpecName: "utilities") pod "6ee3a051-67a5-47c8-b663-8bff4a952094" (UID: "6ee3a051-67a5-47c8-b663-8bff4a952094"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:22:26 crc kubenswrapper[5072]: I1124 11:22:26.072082 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ee3a051-67a5-47c8-b663-8bff4a952094-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:26 crc kubenswrapper[5072]: I1124 11:22:26.260249 5072 generic.go:334] "Generic (PLEG): container finished" podID="6ee3a051-67a5-47c8-b663-8bff4a952094" containerID="c3b11d410fcc4741adc8658ed6227be453415ff78f21c19998242cb8d87e0e85" exitCode=0 Nov 24 11:22:26 crc kubenswrapper[5072]: I1124 11:22:26.260295 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6cz9" event={"ID":"6ee3a051-67a5-47c8-b663-8bff4a952094","Type":"ContainerDied","Data":"c3b11d410fcc4741adc8658ed6227be453415ff78f21c19998242cb8d87e0e85"} Nov 24 11:22:26 crc kubenswrapper[5072]: I1124 11:22:26.260323 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6cz9" event={"ID":"6ee3a051-67a5-47c8-b663-8bff4a952094","Type":"ContainerDied","Data":"1a645e7dc4f22e752fd6ac8992a1eeda46db7db02ed270d8bad91b8764ca59af"} Nov 24 11:22:26 crc kubenswrapper[5072]: I1124 11:22:26.260343 5072 scope.go:117] "RemoveContainer" containerID="c3b11d410fcc4741adc8658ed6227be453415ff78f21c19998242cb8d87e0e85" Nov 24 11:22:26 crc kubenswrapper[5072]: I1124 11:22:26.260346 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6cz9" Nov 24 11:22:26 crc kubenswrapper[5072]: I1124 11:22:26.274286 5072 scope.go:117] "RemoveContainer" containerID="4394dfff5b15b37134329ced29a2f05c2716e37600e367787a21895d0117a6be" Nov 24 11:22:26 crc kubenswrapper[5072]: I1124 11:22:26.287800 5072 scope.go:117] "RemoveContainer" containerID="27e9331c6a940e0e88049d50346fb496877998e79f8192e234d6b1115e4c5d52" Nov 24 11:22:26 crc kubenswrapper[5072]: I1124 11:22:26.296548 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6cz9"] Nov 24 11:22:26 crc kubenswrapper[5072]: I1124 11:22:26.303518 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6cz9"] Nov 24 11:22:26 crc kubenswrapper[5072]: I1124 11:22:26.309769 5072 scope.go:117] "RemoveContainer" containerID="c3b11d410fcc4741adc8658ed6227be453415ff78f21c19998242cb8d87e0e85" Nov 24 11:22:26 crc kubenswrapper[5072]: E1124 11:22:26.310211 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3b11d410fcc4741adc8658ed6227be453415ff78f21c19998242cb8d87e0e85\": container with ID starting with c3b11d410fcc4741adc8658ed6227be453415ff78f21c19998242cb8d87e0e85 not found: ID does not exist" containerID="c3b11d410fcc4741adc8658ed6227be453415ff78f21c19998242cb8d87e0e85" Nov 24 11:22:26 crc kubenswrapper[5072]: I1124 11:22:26.310245 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3b11d410fcc4741adc8658ed6227be453415ff78f21c19998242cb8d87e0e85"} err="failed to get container status \"c3b11d410fcc4741adc8658ed6227be453415ff78f21c19998242cb8d87e0e85\": rpc error: code = NotFound desc = could not find container \"c3b11d410fcc4741adc8658ed6227be453415ff78f21c19998242cb8d87e0e85\": container with ID starting with c3b11d410fcc4741adc8658ed6227be453415ff78f21c19998242cb8d87e0e85 not found: ID does not exist" Nov 24 11:22:26 crc 
kubenswrapper[5072]: I1124 11:22:26.310271 5072 scope.go:117] "RemoveContainer" containerID="4394dfff5b15b37134329ced29a2f05c2716e37600e367787a21895d0117a6be" Nov 24 11:22:26 crc kubenswrapper[5072]: E1124 11:22:26.310712 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4394dfff5b15b37134329ced29a2f05c2716e37600e367787a21895d0117a6be\": container with ID starting with 4394dfff5b15b37134329ced29a2f05c2716e37600e367787a21895d0117a6be not found: ID does not exist" containerID="4394dfff5b15b37134329ced29a2f05c2716e37600e367787a21895d0117a6be" Nov 24 11:22:26 crc kubenswrapper[5072]: I1124 11:22:26.310737 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4394dfff5b15b37134329ced29a2f05c2716e37600e367787a21895d0117a6be"} err="failed to get container status \"4394dfff5b15b37134329ced29a2f05c2716e37600e367787a21895d0117a6be\": rpc error: code = NotFound desc = could not find container \"4394dfff5b15b37134329ced29a2f05c2716e37600e367787a21895d0117a6be\": container with ID starting with 4394dfff5b15b37134329ced29a2f05c2716e37600e367787a21895d0117a6be not found: ID does not exist" Nov 24 11:22:26 crc kubenswrapper[5072]: I1124 11:22:26.310758 5072 scope.go:117] "RemoveContainer" containerID="27e9331c6a940e0e88049d50346fb496877998e79f8192e234d6b1115e4c5d52" Nov 24 11:22:26 crc kubenswrapper[5072]: E1124 11:22:26.311221 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27e9331c6a940e0e88049d50346fb496877998e79f8192e234d6b1115e4c5d52\": container with ID starting with 27e9331c6a940e0e88049d50346fb496877998e79f8192e234d6b1115e4c5d52 not found: ID does not exist" containerID="27e9331c6a940e0e88049d50346fb496877998e79f8192e234d6b1115e4c5d52" Nov 24 11:22:26 crc kubenswrapper[5072]: I1124 11:22:26.311246 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27e9331c6a940e0e88049d50346fb496877998e79f8192e234d6b1115e4c5d52"} err="failed to get container status \"27e9331c6a940e0e88049d50346fb496877998e79f8192e234d6b1115e4c5d52\": rpc error: code = NotFound desc = could not find container \"27e9331c6a940e0e88049d50346fb496877998e79f8192e234d6b1115e4c5d52\": container with ID starting with 27e9331c6a940e0e88049d50346fb496877998e79f8192e234d6b1115e4c5d52 not found: ID does not exist" Nov 24 11:22:26 crc kubenswrapper[5072]: I1124 11:22:26.337589 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-fj9hm" Nov 24 11:22:26 crc kubenswrapper[5072]: I1124 11:22:26.337645 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-fj9hm" Nov 24 11:22:26 crc kubenswrapper[5072]: I1124 11:22:26.363742 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-fj9hm" Nov 24 11:22:27 crc kubenswrapper[5072]: I1124 11:22:27.023440 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee3a051-67a5-47c8-b663-8bff4a952094" path="/var/lib/kubelet/pods/6ee3a051-67a5-47c8-b663-8bff4a952094/volumes" Nov 24 11:22:27 crc kubenswrapper[5072]: I1124 11:22:27.293899 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-fj9hm" Nov 24 11:22:30 crc kubenswrapper[5072]: I1124 11:22:30.838189 5072 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65"] Nov 24 11:22:30 crc kubenswrapper[5072]: E1124 11:22:30.840504 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ee3a051-67a5-47c8-b663-8bff4a952094" containerName="extract-utilities" Nov 24 11:22:30 crc kubenswrapper[5072]: I1124 11:22:30.840655 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ee3a051-67a5-47c8-b663-8bff4a952094" containerName="extract-utilities" Nov 24 11:22:30 crc kubenswrapper[5072]: E1124 11:22:30.840762 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ee3a051-67a5-47c8-b663-8bff4a952094" containerName="registry-server" Nov 24 11:22:30 crc kubenswrapper[5072]: I1124 11:22:30.840866 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ee3a051-67a5-47c8-b663-8bff4a952094" containerName="registry-server" Nov 24 11:22:30 crc kubenswrapper[5072]: E1124 11:22:30.840980 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5dac824-1711-46d7-8bd3-55975eb05d63" containerName="extract-content" Nov 24 11:22:30 crc kubenswrapper[5072]: I1124 11:22:30.841081 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5dac824-1711-46d7-8bd3-55975eb05d63" containerName="extract-content" Nov 24 11:22:30 crc kubenswrapper[5072]: E1124 11:22:30.841183 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5dac824-1711-46d7-8bd3-55975eb05d63" containerName="registry-server" Nov 24 11:22:30 crc kubenswrapper[5072]: I1124 11:22:30.841283 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5dac824-1711-46d7-8bd3-55975eb05d63" containerName="registry-server" Nov 24 11:22:30 crc kubenswrapper[5072]: E1124 11:22:30.841427 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ee3a051-67a5-47c8-b663-8bff4a952094" containerName="extract-content" Nov 24 11:22:30 crc kubenswrapper[5072]: I1124 11:22:30.841578 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ee3a051-67a5-47c8-b663-8bff4a952094" containerName="extract-content" Nov 24 11:22:30 crc kubenswrapper[5072]: E1124 11:22:30.841683 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5dac824-1711-46d7-8bd3-55975eb05d63" containerName="extract-utilities" Nov 24 11:22:30 crc kubenswrapper[5072]: I1124 11:22:30.841783 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5dac824-1711-46d7-8bd3-55975eb05d63" containerName="extract-utilities" Nov 24 11:22:30 crc kubenswrapper[5072]: I1124 11:22:30.842075 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5dac824-1711-46d7-8bd3-55975eb05d63" containerName="registry-server" Nov 24 11:22:30 crc kubenswrapper[5072]: I1124 11:22:30.842185 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ee3a051-67a5-47c8-b663-8bff4a952094" containerName="registry-server" Nov 24 11:22:30 crc kubenswrapper[5072]: I1124 11:22:30.845193 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65" Nov 24 11:22:30 crc kubenswrapper[5072]: I1124 11:22:30.847667 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-j5zrj" Nov 24 11:22:30 crc kubenswrapper[5072]: I1124 11:22:30.856589 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65"] Nov 24 11:22:31 crc kubenswrapper[5072]: I1124 11:22:31.034182 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e7f9a3f4-4e91-406d-b8da-1bf99ac318bd-util\") pod \"9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65\" (UID: \"e7f9a3f4-4e91-406d-b8da-1bf99ac318bd\") " pod="openstack-operators/9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65" Nov 24 11:22:31 crc kubenswrapper[5072]: I1124 11:22:31.034240 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e7f9a3f4-4e91-406d-b8da-1bf99ac318bd-bundle\") pod \"9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65\" (UID: \"e7f9a3f4-4e91-406d-b8da-1bf99ac318bd\") " pod="openstack-operators/9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65" Nov 24 11:22:31 crc kubenswrapper[5072]: I1124 11:22:31.034271 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9q47\" (UniqueName: \"kubernetes.io/projected/e7f9a3f4-4e91-406d-b8da-1bf99ac318bd-kube-api-access-x9q47\") pod \"9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65\" (UID: \"e7f9a3f4-4e91-406d-b8da-1bf99ac318bd\") " pod="openstack-operators/9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65" Nov 24 11:22:31 crc kubenswrapper[5072]: I1124 11:22:31.135658 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e7f9a3f4-4e91-406d-b8da-1bf99ac318bd-util\") pod \"9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65\" (UID: \"e7f9a3f4-4e91-406d-b8da-1bf99ac318bd\") " pod="openstack-operators/9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65" Nov 24 11:22:31 crc kubenswrapper[5072]: I1124 11:22:31.135719 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e7f9a3f4-4e91-406d-b8da-1bf99ac318bd-bundle\") pod \"9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65\" (UID: \"e7f9a3f4-4e91-406d-b8da-1bf99ac318bd\") " pod="openstack-operators/9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65" Nov 24 11:22:31 crc kubenswrapper[5072]: I1124 11:22:31.135752 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9q47\" (UniqueName: \"kubernetes.io/projected/e7f9a3f4-4e91-406d-b8da-1bf99ac318bd-kube-api-access-x9q47\") pod \"9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65\" (UID: \"e7f9a3f4-4e91-406d-b8da-1bf99ac318bd\") " pod="openstack-operators/9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65" Nov 24 11:22:31 crc kubenswrapper[5072]: I1124 11:22:31.136227 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/e7f9a3f4-4e91-406d-b8da-1bf99ac318bd-util\") pod \"9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65\" (UID: \"e7f9a3f4-4e91-406d-b8da-1bf99ac318bd\") " pod="openstack-operators/9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65" Nov 24 11:22:31 crc kubenswrapper[5072]: I1124 11:22:31.136359 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e7f9a3f4-4e91-406d-b8da-1bf99ac318bd-bundle\") pod \"9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65\" (UID: \"e7f9a3f4-4e91-406d-b8da-1bf99ac318bd\") " pod="openstack-operators/9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65" Nov 24 11:22:31 crc kubenswrapper[5072]: I1124 11:22:31.155175 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9q47\" (UniqueName: \"kubernetes.io/projected/e7f9a3f4-4e91-406d-b8da-1bf99ac318bd-kube-api-access-x9q47\") pod \"9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65\" (UID: \"e7f9a3f4-4e91-406d-b8da-1bf99ac318bd\") " pod="openstack-operators/9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65" Nov 24 11:22:31 crc kubenswrapper[5072]: I1124 11:22:31.165547 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65" Nov 24 11:22:31 crc kubenswrapper[5072]: I1124 11:22:31.624879 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65"] Nov 24 11:22:31 crc kubenswrapper[5072]: W1124 11:22:31.628047 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7f9a3f4_4e91_406d_b8da_1bf99ac318bd.slice/crio-7b776c7d88448cf4e83e45acaa8c3504c3121f7ca188d8e0a13af9cf75cd8be2 WatchSource:0}: Error finding container 7b776c7d88448cf4e83e45acaa8c3504c3121f7ca188d8e0a13af9cf75cd8be2: Status 404 returned error can't find the container with id 7b776c7d88448cf4e83e45acaa8c3504c3121f7ca188d8e0a13af9cf75cd8be2 Nov 24 11:22:32 crc kubenswrapper[5072]: I1124 11:22:32.304609 5072 generic.go:334] "Generic (PLEG): container finished" podID="e7f9a3f4-4e91-406d-b8da-1bf99ac318bd" containerID="6a6f2555c3d33b8d621b55ca57286dfada767018cb0ea48fb57ade491840c70f" exitCode=0 Nov 24 11:22:32 crc kubenswrapper[5072]: I1124 11:22:32.304656 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65" event={"ID":"e7f9a3f4-4e91-406d-b8da-1bf99ac318bd","Type":"ContainerDied","Data":"6a6f2555c3d33b8d621b55ca57286dfada767018cb0ea48fb57ade491840c70f"} Nov 24 11:22:32 crc kubenswrapper[5072]: I1124 11:22:32.304685 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65" event={"ID":"e7f9a3f4-4e91-406d-b8da-1bf99ac318bd","Type":"ContainerStarted","Data":"7b776c7d88448cf4e83e45acaa8c3504c3121f7ca188d8e0a13af9cf75cd8be2"} Nov 24 11:22:33 crc kubenswrapper[5072]: I1124 11:22:33.312703 5072 generic.go:334] "Generic (PLEG): container finished" podID="e7f9a3f4-4e91-406d-b8da-1bf99ac318bd" containerID="1074206a6d7d1bd0e483e98d9368a7b20cc9626fa2b14b9443cdcc2f78fdb031" exitCode=0 Nov 24 11:22:33 crc kubenswrapper[5072]: I1124 11:22:33.312802 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65" event={"ID":"e7f9a3f4-4e91-406d-b8da-1bf99ac318bd","Type":"ContainerDied","Data":"1074206a6d7d1bd0e483e98d9368a7b20cc9626fa2b14b9443cdcc2f78fdb031"} Nov 24 11:22:34 crc kubenswrapper[5072]: I1124 11:22:34.323517 5072 generic.go:334] "Generic (PLEG): container finished" podID="e7f9a3f4-4e91-406d-b8da-1bf99ac318bd" containerID="8e92dd47b34d4d0dfb67d27fd0141d3f7f64d0322baf31922fc34684b5e3257d" exitCode=0 Nov 24 11:22:34 crc kubenswrapper[5072]: I1124 11:22:34.323583 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65" event={"ID":"e7f9a3f4-4e91-406d-b8da-1bf99ac318bd","Type":"ContainerDied","Data":"8e92dd47b34d4d0dfb67d27fd0141d3f7f64d0322baf31922fc34684b5e3257d"} Nov 24 11:22:35 crc kubenswrapper[5072]: I1124 11:22:35.653953 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65" Nov 24 11:22:35 crc kubenswrapper[5072]: I1124 11:22:35.799504 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e7f9a3f4-4e91-406d-b8da-1bf99ac318bd-bundle\") pod \"e7f9a3f4-4e91-406d-b8da-1bf99ac318bd\" (UID: \"e7f9a3f4-4e91-406d-b8da-1bf99ac318bd\") " Nov 24 11:22:35 crc kubenswrapper[5072]: I1124 11:22:35.799565 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9q47\" (UniqueName: \"kubernetes.io/projected/e7f9a3f4-4e91-406d-b8da-1bf99ac318bd-kube-api-access-x9q47\") pod \"e7f9a3f4-4e91-406d-b8da-1bf99ac318bd\" (UID: \"e7f9a3f4-4e91-406d-b8da-1bf99ac318bd\") " Nov 24 11:22:35 crc kubenswrapper[5072]: I1124 11:22:35.799663 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e7f9a3f4-4e91-406d-b8da-1bf99ac318bd-util\") pod \"e7f9a3f4-4e91-406d-b8da-1bf99ac318bd\" (UID: \"e7f9a3f4-4e91-406d-b8da-1bf99ac318bd\") " Nov 24 11:22:35 crc kubenswrapper[5072]: I1124 11:22:35.800143 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7f9a3f4-4e91-406d-b8da-1bf99ac318bd-bundle" (OuterVolumeSpecName: "bundle") pod "e7f9a3f4-4e91-406d-b8da-1bf99ac318bd" (UID: "e7f9a3f4-4e91-406d-b8da-1bf99ac318bd"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:22:35 crc kubenswrapper[5072]: I1124 11:22:35.811596 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7f9a3f4-4e91-406d-b8da-1bf99ac318bd-kube-api-access-x9q47" (OuterVolumeSpecName: "kube-api-access-x9q47") pod "e7f9a3f4-4e91-406d-b8da-1bf99ac318bd" (UID: "e7f9a3f4-4e91-406d-b8da-1bf99ac318bd"). InnerVolumeSpecName "kube-api-access-x9q47". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:22:35 crc kubenswrapper[5072]: I1124 11:22:35.812740 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7f9a3f4-4e91-406d-b8da-1bf99ac318bd-util" (OuterVolumeSpecName: "util") pod "e7f9a3f4-4e91-406d-b8da-1bf99ac318bd" (UID: "e7f9a3f4-4e91-406d-b8da-1bf99ac318bd"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:22:35 crc kubenswrapper[5072]: I1124 11:22:35.901232 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9q47\" (UniqueName: \"kubernetes.io/projected/e7f9a3f4-4e91-406d-b8da-1bf99ac318bd-kube-api-access-x9q47\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:35 crc kubenswrapper[5072]: I1124 11:22:35.901272 5072 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e7f9a3f4-4e91-406d-b8da-1bf99ac318bd-util\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:35 crc kubenswrapper[5072]: I1124 11:22:35.901286 5072 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e7f9a3f4-4e91-406d-b8da-1bf99ac318bd-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:36 crc kubenswrapper[5072]: I1124 11:22:36.343096 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65" event={"ID":"e7f9a3f4-4e91-406d-b8da-1bf99ac318bd","Type":"ContainerDied","Data":"7b776c7d88448cf4e83e45acaa8c3504c3121f7ca188d8e0a13af9cf75cd8be2"} Nov 24 11:22:36 crc kubenswrapper[5072]: I1124 11:22:36.343172 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b776c7d88448cf4e83e45acaa8c3504c3121f7ca188d8e0a13af9cf75cd8be2" Nov 24 11:22:36 crc kubenswrapper[5072]: I1124 11:22:36.343286 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65" Nov 24 11:22:41 crc kubenswrapper[5072]: I1124 11:22:41.393160 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5np56"] Nov 24 11:22:41 crc kubenswrapper[5072]: E1124 11:22:41.393881 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7f9a3f4-4e91-406d-b8da-1bf99ac318bd" containerName="pull" Nov 24 11:22:41 crc kubenswrapper[5072]: I1124 11:22:41.393892 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7f9a3f4-4e91-406d-b8da-1bf99ac318bd" containerName="pull" Nov 24 11:22:41 crc kubenswrapper[5072]: E1124 11:22:41.393904 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7f9a3f4-4e91-406d-b8da-1bf99ac318bd" containerName="extract" Nov 24 11:22:41 crc kubenswrapper[5072]: I1124 11:22:41.393913 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7f9a3f4-4e91-406d-b8da-1bf99ac318bd" containerName="extract" Nov 24 11:22:41 crc kubenswrapper[5072]: E1124 11:22:41.393924 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7f9a3f4-4e91-406d-b8da-1bf99ac318bd" containerName="util" Nov 24 11:22:41 crc kubenswrapper[5072]: I1124 11:22:41.393930 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7f9a3f4-4e91-406d-b8da-1bf99ac318bd" containerName="util" Nov 24 11:22:41 crc kubenswrapper[5072]: I1124 11:22:41.394034 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7f9a3f4-4e91-406d-b8da-1bf99ac318bd" containerName="extract" Nov 24 11:22:41 crc kubenswrapper[5072]: I1124 11:22:41.394852 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5np56" Nov 24 11:22:41 crc kubenswrapper[5072]: I1124 11:22:41.401131 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5np56"] Nov 24 11:22:41 crc kubenswrapper[5072]: I1124 11:22:41.585124 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7f75ec5-3739-4ed6-a705-326b47f324a7-catalog-content\") pod \"community-operators-5np56\" (UID: \"c7f75ec5-3739-4ed6-a705-326b47f324a7\") " pod="openshift-marketplace/community-operators-5np56" Nov 24 11:22:41 crc kubenswrapper[5072]: I1124 11:22:41.585255 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxttj\" (UniqueName: \"kubernetes.io/projected/c7f75ec5-3739-4ed6-a705-326b47f324a7-kube-api-access-cxttj\") pod \"community-operators-5np56\" (UID: \"c7f75ec5-3739-4ed6-a705-326b47f324a7\") " pod="openshift-marketplace/community-operators-5np56" Nov 24 11:22:41 crc kubenswrapper[5072]: I1124 11:22:41.585306 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7f75ec5-3739-4ed6-a705-326b47f324a7-utilities\") pod \"community-operators-5np56\" (UID: \"c7f75ec5-3739-4ed6-a705-326b47f324a7\") " pod="openshift-marketplace/community-operators-5np56" Nov 24 11:22:41 crc kubenswrapper[5072]: I1124 11:22:41.686341 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxttj\" (UniqueName: \"kubernetes.io/projected/c7f75ec5-3739-4ed6-a705-326b47f324a7-kube-api-access-cxttj\") pod \"community-operators-5np56\" (UID: \"c7f75ec5-3739-4ed6-a705-326b47f324a7\") " pod="openshift-marketplace/community-operators-5np56" Nov 24 11:22:41 crc kubenswrapper[5072]: I1124 11:22:41.686405 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7f75ec5-3739-4ed6-a705-326b47f324a7-utilities\") pod \"community-operators-5np56\" (UID: \"c7f75ec5-3739-4ed6-a705-326b47f324a7\") " pod="openshift-marketplace/community-operators-5np56" Nov 24 11:22:41 crc kubenswrapper[5072]: I1124 11:22:41.686483 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7f75ec5-3739-4ed6-a705-326b47f324a7-catalog-content\") pod \"community-operators-5np56\" (UID: \"c7f75ec5-3739-4ed6-a705-326b47f324a7\") " pod="openshift-marketplace/community-operators-5np56" Nov 24 11:22:41 crc kubenswrapper[5072]: I1124 11:22:41.686975 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7f75ec5-3739-4ed6-a705-326b47f324a7-utilities\") pod \"community-operators-5np56\" (UID: \"c7f75ec5-3739-4ed6-a705-326b47f324a7\") " pod="openshift-marketplace/community-operators-5np56" Nov 24 11:22:41 crc kubenswrapper[5072]: I1124 11:22:41.687057 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7f75ec5-3739-4ed6-a705-326b47f324a7-catalog-content\") pod \"community-operators-5np56\" (UID: \"c7f75ec5-3739-4ed6-a705-326b47f324a7\") " pod="openshift-marketplace/community-operators-5np56" Nov 24 11:22:41 crc kubenswrapper[5072]: I1124 11:22:41.710345 5072 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cxttj\" (UniqueName: \"kubernetes.io/projected/c7f75ec5-3739-4ed6-a705-326b47f324a7-kube-api-access-cxttj\") pod \"community-operators-5np56\" (UID: \"c7f75ec5-3739-4ed6-a705-326b47f324a7\") " pod="openshift-marketplace/community-operators-5np56" Nov 24 11:22:41 crc kubenswrapper[5072]: I1124 11:22:41.714165 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5np56" Nov 24 11:22:42 crc kubenswrapper[5072]: I1124 11:22:42.128233 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5np56"] Nov 24 11:22:42 crc kubenswrapper[5072]: I1124 11:22:42.390956 5072 generic.go:334] "Generic (PLEG): container finished" podID="c7f75ec5-3739-4ed6-a705-326b47f324a7" containerID="8acb6b39098a332f6f8cd56b28f726b6c89257f783d9aa5776630a876b1f0e59" exitCode=0 Nov 24 11:22:42 crc kubenswrapper[5072]: I1124 11:22:42.391006 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5np56" event={"ID":"c7f75ec5-3739-4ed6-a705-326b47f324a7","Type":"ContainerDied","Data":"8acb6b39098a332f6f8cd56b28f726b6c89257f783d9aa5776630a876b1f0e59"} Nov 24 11:22:42 crc kubenswrapper[5072]: I1124 11:22:42.391234 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5np56" event={"ID":"c7f75ec5-3739-4ed6-a705-326b47f324a7","Type":"ContainerStarted","Data":"d09f792f44da0362d815ccbd278ca906d6d057f756c7f3fc9e7e22788226dcde"} Nov 24 11:22:42 crc kubenswrapper[5072]: I1124 11:22:42.829327 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-68868f9b94-xzgj7"] Nov 24 11:22:42 crc kubenswrapper[5072]: I1124 11:22:42.830528 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-68868f9b94-xzgj7" Nov 24 11:22:42 crc kubenswrapper[5072]: I1124 11:22:42.832583 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-qlx2g" Nov 24 11:22:42 crc kubenswrapper[5072]: I1124 11:22:42.903208 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-68868f9b94-xzgj7"] Nov 24 11:22:43 crc kubenswrapper[5072]: I1124 11:22:43.003206 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn2zr\" (UniqueName: \"kubernetes.io/projected/cf28b96d-16c5-40f6-a588-0a77f527d52d-kube-api-access-rn2zr\") pod \"openstack-operator-controller-operator-68868f9b94-xzgj7\" (UID: \"cf28b96d-16c5-40f6-a588-0a77f527d52d\") " pod="openstack-operators/openstack-operator-controller-operator-68868f9b94-xzgj7" Nov 24 11:22:43 crc kubenswrapper[5072]: I1124 11:22:43.104897 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn2zr\" (UniqueName: \"kubernetes.io/projected/cf28b96d-16c5-40f6-a588-0a77f527d52d-kube-api-access-rn2zr\") pod \"openstack-operator-controller-operator-68868f9b94-xzgj7\" (UID: \"cf28b96d-16c5-40f6-a588-0a77f527d52d\") " pod="openstack-operators/openstack-operator-controller-operator-68868f9b94-xzgj7" Nov 24 11:22:43 crc kubenswrapper[5072]: I1124 11:22:43.124627 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn2zr\" (UniqueName: \"kubernetes.io/projected/cf28b96d-16c5-40f6-a588-0a77f527d52d-kube-api-access-rn2zr\") pod \"openstack-operator-controller-operator-68868f9b94-xzgj7\" (UID: \"cf28b96d-16c5-40f6-a588-0a77f527d52d\") " pod="openstack-operators/openstack-operator-controller-operator-68868f9b94-xzgj7" Nov 24 11:22:43 crc kubenswrapper[5072]: I1124 11:22:43.148441 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-68868f9b94-xzgj7" Nov 24 11:22:43 crc kubenswrapper[5072]: I1124 11:22:43.484578 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5np56" event={"ID":"c7f75ec5-3739-4ed6-a705-326b47f324a7","Type":"ContainerStarted","Data":"c52f38affbd6711f8583157471d0f65b0595c18f04bb61cf6ee71a7cd9971cfc"} Nov 24 11:22:43 crc kubenswrapper[5072]: I1124 11:22:43.579855 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-68868f9b94-xzgj7"] Nov 24 11:22:44 crc kubenswrapper[5072]: I1124 11:22:44.501976 5072 generic.go:334] "Generic (PLEG): container finished" podID="c7f75ec5-3739-4ed6-a705-326b47f324a7" containerID="c52f38affbd6711f8583157471d0f65b0595c18f04bb61cf6ee71a7cd9971cfc" exitCode=0 Nov 24 11:22:44 crc kubenswrapper[5072]: I1124 11:22:44.503291 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5np56" event={"ID":"c7f75ec5-3739-4ed6-a705-326b47f324a7","Type":"ContainerDied","Data":"c52f38affbd6711f8583157471d0f65b0595c18f04bb61cf6ee71a7cd9971cfc"} Nov 24 11:22:44 crc kubenswrapper[5072]: I1124 11:22:44.506728 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-68868f9b94-xzgj7" event={"ID":"cf28b96d-16c5-40f6-a588-0a77f527d52d","Type":"ContainerStarted","Data":"a1181148a96c85e5bc4cf98c36a5b21b06c31fd0dfd1e1a3ee3ac83ef86c0fc6"} Nov 24 11:22:47 crc kubenswrapper[5072]: I1124 11:22:47.526701 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5np56" event={"ID":"c7f75ec5-3739-4ed6-a705-326b47f324a7","Type":"ContainerStarted","Data":"7ccfc921cc1b52a848f402584a3eecac43a57149541f75b58ddd5acee81a7bf8"} Nov 24 11:22:47 crc kubenswrapper[5072]: I1124 11:22:47.528533 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-68868f9b94-xzgj7" event={"ID":"cf28b96d-16c5-40f6-a588-0a77f527d52d","Type":"ContainerStarted","Data":"9373306d1f867f6cb0fe6bd3473845abe79cc67589aead53b7a9ae37483d143e"} Nov 24 11:22:47 crc kubenswrapper[5072]: I1124 11:22:47.528814 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-68868f9b94-xzgj7" Nov 24 11:22:47 crc kubenswrapper[5072]: I1124 11:22:47.551881 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5np56" podStartSLOduration=1.8218274700000001 podStartE2EDuration="6.551863254s" podCreationTimestamp="2025-11-24 11:22:41 +0000 UTC" firstStartedPulling="2025-11-24 11:22:42.392201745 +0000 UTC m=+814.103726221" lastFinishedPulling="2025-11-24 11:22:47.122237529 +0000 UTC m=+818.833762005" observedRunningTime="2025-11-24 11:22:47.546410717 +0000 UTC m=+819.257935213" watchObservedRunningTime="2025-11-24 11:22:47.551863254 +0000 UTC m=+819.263387750" Nov 24 11:22:47 crc kubenswrapper[5072]: I1124 11:22:47.579975 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-68868f9b94-xzgj7" podStartSLOduration=2.039810093 podStartE2EDuration="5.579951512s" podCreationTimestamp="2025-11-24 11:22:42 +0000 UTC" firstStartedPulling="2025-11-24 11:22:43.595283252 +0000 UTC m=+815.306807738" lastFinishedPulling="2025-11-24 
11:22:47.135424681 +0000 UTC m=+818.846949157" observedRunningTime="2025-11-24 11:22:47.57469654 +0000 UTC m=+819.286221056" watchObservedRunningTime="2025-11-24 11:22:47.579951512 +0000 UTC m=+819.291475998" Nov 24 11:22:51 crc kubenswrapper[5072]: I1124 11:22:51.714777 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5np56" Nov 24 11:22:51 crc kubenswrapper[5072]: I1124 11:22:51.715592 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5np56" Nov 24 11:22:51 crc kubenswrapper[5072]: I1124 11:22:51.767584 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5np56" Nov 24 11:22:52 crc kubenswrapper[5072]: I1124 11:22:52.606524 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5np56" Nov 24 11:22:53 crc kubenswrapper[5072]: I1124 11:22:53.152237 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-68868f9b94-xzgj7" Nov 24 11:22:54 crc kubenswrapper[5072]: I1124 11:22:54.182933 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5np56"] Nov 24 11:22:54 crc kubenswrapper[5072]: I1124 11:22:54.582365 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5np56" podUID="c7f75ec5-3739-4ed6-a705-326b47f324a7" containerName="registry-server" containerID="cri-o://7ccfc921cc1b52a848f402584a3eecac43a57149541f75b58ddd5acee81a7bf8" gracePeriod=2 Nov 24 11:22:55 crc kubenswrapper[5072]: I1124 11:22:55.150461 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5np56" Nov 24 11:22:55 crc kubenswrapper[5072]: I1124 11:22:55.325954 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7f75ec5-3739-4ed6-a705-326b47f324a7-utilities\") pod \"c7f75ec5-3739-4ed6-a705-326b47f324a7\" (UID: \"c7f75ec5-3739-4ed6-a705-326b47f324a7\") " Nov 24 11:22:55 crc kubenswrapper[5072]: I1124 11:22:55.326302 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxttj\" (UniqueName: \"kubernetes.io/projected/c7f75ec5-3739-4ed6-a705-326b47f324a7-kube-api-access-cxttj\") pod \"c7f75ec5-3739-4ed6-a705-326b47f324a7\" (UID: \"c7f75ec5-3739-4ed6-a705-326b47f324a7\") " Nov 24 11:22:55 crc kubenswrapper[5072]: I1124 11:22:55.326351 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7f75ec5-3739-4ed6-a705-326b47f324a7-catalog-content\") pod \"c7f75ec5-3739-4ed6-a705-326b47f324a7\" (UID: \"c7f75ec5-3739-4ed6-a705-326b47f324a7\") " Nov 24 11:22:55 crc kubenswrapper[5072]: I1124 11:22:55.327430 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7f75ec5-3739-4ed6-a705-326b47f324a7-utilities" (OuterVolumeSpecName: "utilities") pod "c7f75ec5-3739-4ed6-a705-326b47f324a7" (UID: "c7f75ec5-3739-4ed6-a705-326b47f324a7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:22:55 crc kubenswrapper[5072]: I1124 11:22:55.334823 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7f75ec5-3739-4ed6-a705-326b47f324a7-kube-api-access-cxttj" (OuterVolumeSpecName: "kube-api-access-cxttj") pod "c7f75ec5-3739-4ed6-a705-326b47f324a7" (UID: "c7f75ec5-3739-4ed6-a705-326b47f324a7"). InnerVolumeSpecName "kube-api-access-cxttj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:22:55 crc kubenswrapper[5072]: I1124 11:22:55.393620 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7f75ec5-3739-4ed6-a705-326b47f324a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7f75ec5-3739-4ed6-a705-326b47f324a7" (UID: "c7f75ec5-3739-4ed6-a705-326b47f324a7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:22:55 crc kubenswrapper[5072]: I1124 11:22:55.427483 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7f75ec5-3739-4ed6-a705-326b47f324a7-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:55 crc kubenswrapper[5072]: I1124 11:22:55.427525 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxttj\" (UniqueName: \"kubernetes.io/projected/c7f75ec5-3739-4ed6-a705-326b47f324a7-kube-api-access-cxttj\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:55 crc kubenswrapper[5072]: I1124 11:22:55.427538 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7f75ec5-3739-4ed6-a705-326b47f324a7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:55 crc kubenswrapper[5072]: I1124 11:22:55.591887 5072 generic.go:334] "Generic (PLEG): container finished" podID="c7f75ec5-3739-4ed6-a705-326b47f324a7" containerID="7ccfc921cc1b52a848f402584a3eecac43a57149541f75b58ddd5acee81a7bf8" exitCode=0 Nov 24 11:22:55 crc kubenswrapper[5072]: I1124 11:22:55.591935 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5np56" event={"ID":"c7f75ec5-3739-4ed6-a705-326b47f324a7","Type":"ContainerDied","Data":"7ccfc921cc1b52a848f402584a3eecac43a57149541f75b58ddd5acee81a7bf8"} Nov 24 11:22:55 crc kubenswrapper[5072]: I1124 11:22:55.591965 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5np56" event={"ID":"c7f75ec5-3739-4ed6-a705-326b47f324a7","Type":"ContainerDied","Data":"d09f792f44da0362d815ccbd278ca906d6d057f756c7f3fc9e7e22788226dcde"} Nov 24 11:22:55 crc kubenswrapper[5072]: I1124 11:22:55.591988 5072 scope.go:117] "RemoveContainer" containerID="7ccfc921cc1b52a848f402584a3eecac43a57149541f75b58ddd5acee81a7bf8" Nov 24 11:22:55 crc kubenswrapper[5072]: I1124 11:22:55.592152 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5np56" Nov 24 11:22:55 crc kubenswrapper[5072]: I1124 11:22:55.611203 5072 scope.go:117] "RemoveContainer" containerID="c52f38affbd6711f8583157471d0f65b0595c18f04bb61cf6ee71a7cd9971cfc" Nov 24 11:22:55 crc kubenswrapper[5072]: I1124 11:22:55.631184 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5np56"] Nov 24 11:22:55 crc kubenswrapper[5072]: I1124 11:22:55.636012 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5np56"] Nov 24 11:22:55 crc kubenswrapper[5072]: I1124 11:22:55.650299 5072 scope.go:117] "RemoveContainer" containerID="8acb6b39098a332f6f8cd56b28f726b6c89257f783d9aa5776630a876b1f0e59" Nov 24 11:22:55 crc kubenswrapper[5072]: I1124 11:22:55.669630 5072 scope.go:117] "RemoveContainer" containerID="7ccfc921cc1b52a848f402584a3eecac43a57149541f75b58ddd5acee81a7bf8" Nov 24 11:22:55 crc kubenswrapper[5072]: E1124 11:22:55.670091 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ccfc921cc1b52a848f402584a3eecac43a57149541f75b58ddd5acee81a7bf8\": container with ID starting with 7ccfc921cc1b52a848f402584a3eecac43a57149541f75b58ddd5acee81a7bf8 not found: ID does not exist" containerID="7ccfc921cc1b52a848f402584a3eecac43a57149541f75b58ddd5acee81a7bf8" Nov 24 11:22:55 crc kubenswrapper[5072]: I1124 11:22:55.670123 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ccfc921cc1b52a848f402584a3eecac43a57149541f75b58ddd5acee81a7bf8"} err="failed to get container status \"7ccfc921cc1b52a848f402584a3eecac43a57149541f75b58ddd5acee81a7bf8\": rpc error: code = NotFound desc = could not find container \"7ccfc921cc1b52a848f402584a3eecac43a57149541f75b58ddd5acee81a7bf8\": container with ID starting with 7ccfc921cc1b52a848f402584a3eecac43a57149541f75b58ddd5acee81a7bf8 not found: ID does not exist" Nov 24 11:22:55 crc kubenswrapper[5072]: I1124 11:22:55.670144 5072 scope.go:117] "RemoveContainer" containerID="c52f38affbd6711f8583157471d0f65b0595c18f04bb61cf6ee71a7cd9971cfc" Nov 24 11:22:55 crc kubenswrapper[5072]: E1124 11:22:55.670639 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c52f38affbd6711f8583157471d0f65b0595c18f04bb61cf6ee71a7cd9971cfc\": container with ID starting with c52f38affbd6711f8583157471d0f65b0595c18f04bb61cf6ee71a7cd9971cfc not found: ID does not exist" containerID="c52f38affbd6711f8583157471d0f65b0595c18f04bb61cf6ee71a7cd9971cfc" Nov 24 11:22:55 crc kubenswrapper[5072]: I1124 11:22:55.670678 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c52f38affbd6711f8583157471d0f65b0595c18f04bb61cf6ee71a7cd9971cfc"} err="failed to get container status \"c52f38affbd6711f8583157471d0f65b0595c18f04bb61cf6ee71a7cd9971cfc\": rpc error: code = NotFound desc = could not find container \"c52f38affbd6711f8583157471d0f65b0595c18f04bb61cf6ee71a7cd9971cfc\": container with ID starting with c52f38affbd6711f8583157471d0f65b0595c18f04bb61cf6ee71a7cd9971cfc not found: ID does not exist" Nov 24 11:22:55 crc kubenswrapper[5072]: I1124 11:22:55.670702 5072 scope.go:117] "RemoveContainer" containerID="8acb6b39098a332f6f8cd56b28f726b6c89257f783d9aa5776630a876b1f0e59" Nov 24 11:22:55 crc kubenswrapper[5072]: E1124 11:22:55.671101 5072 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8acb6b39098a332f6f8cd56b28f726b6c89257f783d9aa5776630a876b1f0e59\": container with ID starting with 8acb6b39098a332f6f8cd56b28f726b6c89257f783d9aa5776630a876b1f0e59 not found: ID does not exist" containerID="8acb6b39098a332f6f8cd56b28f726b6c89257f783d9aa5776630a876b1f0e59" Nov 24 11:22:55 crc kubenswrapper[5072]: I1124 11:22:55.671120 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8acb6b39098a332f6f8cd56b28f726b6c89257f783d9aa5776630a876b1f0e59"} err="failed to get container status \"8acb6b39098a332f6f8cd56b28f726b6c89257f783d9aa5776630a876b1f0e59\": rpc error: code = NotFound desc = could not find container \"8acb6b39098a332f6f8cd56b28f726b6c89257f783d9aa5776630a876b1f0e59\": container with ID starting with 8acb6b39098a332f6f8cd56b28f726b6c89257f783d9aa5776630a876b1f0e59 not found: ID does not exist" Nov 24 11:22:57 crc kubenswrapper[5072]: I1124 11:22:57.023195 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7f75ec5-3739-4ed6-a705-326b47f324a7" path="/var/lib/kubelet/pods/c7f75ec5-3739-4ed6-a705-326b47f324a7/volumes" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.813469 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4jwxd"] Nov 24 11:23:07 crc kubenswrapper[5072]: E1124 11:23:07.814314 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f75ec5-3739-4ed6-a705-326b47f324a7" containerName="registry-server" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.814332 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f75ec5-3739-4ed6-a705-326b47f324a7" containerName="registry-server" Nov 24 11:23:07 crc kubenswrapper[5072]: E1124 11:23:07.814343 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f75ec5-3739-4ed6-a705-326b47f324a7" containerName="extract-utilities" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.814350 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f75ec5-3739-4ed6-a705-326b47f324a7" containerName="extract-utilities" Nov 24 11:23:07 crc kubenswrapper[5072]: E1124 11:23:07.814394 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f75ec5-3739-4ed6-a705-326b47f324a7" containerName="extract-content" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.814403 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f75ec5-3739-4ed6-a705-326b47f324a7" containerName="extract-content" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.814533 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7f75ec5-3739-4ed6-a705-326b47f324a7" containerName="registry-server" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.815305 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4jwxd" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.817067 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-g9zcf" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.821343 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-756nd"] Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.822464 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-756nd" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.824126 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-66ps8" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.829542 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4jwxd"] Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.832896 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-756nd"] Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.848912 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-bpsnt"] Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.853062 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-bpsnt" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.862028 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-lst9c" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.878251 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-5s9dg"] Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.880330 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-5s9dg" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.890982 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-srrq9" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.891845 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs8ft\" (UniqueName: \"kubernetes.io/projected/500235e4-633d-486d-8ea9-bc0830747b6f-kube-api-access-bs8ft\") pod \"designate-operator-controller-manager-7d695c9b56-bpsnt\" (UID: \"500235e4-633d-486d-8ea9-bc0830747b6f\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-bpsnt" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.891892 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnw7h\" (UniqueName: \"kubernetes.io/projected/a4945263-5f74-4c93-b782-8a381e40275c-kube-api-access-xnw7h\") pod \"barbican-operator-controller-manager-86dc4d89c8-4jwxd\" (UID: \"a4945263-5f74-4c93-b782-8a381e40275c\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4jwxd" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.891936 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbtfw\" (UniqueName: \"kubernetes.io/projected/67cd7ebd-5d77-4c59-a1af-2283997e4de4-kube-api-access-gbtfw\") pod \"glance-operator-controller-manager-68b95954c9-5s9dg\" (UID: \"67cd7ebd-5d77-4c59-a1af-2283997e4de4\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-5s9dg" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.904697 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-bpsnt"] Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.912079 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-5s9dg"] Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.918861 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-qn647"] Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.920165 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-qn647" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.922722 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-kddxr" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.926218 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-qn647"] Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.932395 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-wkqz4"] Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.933641 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wkqz4" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.936348 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-858778c9dc-lrk4z"] Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.938616 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-lrk4z" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.939055 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-w5v2r" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.942050 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.942267 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-z9k8j" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.949461 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-wkqz4"] Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.972425 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7mzzw"] Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.973412 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7mzzw" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.980971 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-858778c9dc-lrk4z"] Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.985899 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-l5ff9" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.993482 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnw7h\" (UniqueName: \"kubernetes.io/projected/a4945263-5f74-4c93-b782-8a381e40275c-kube-api-access-xnw7h\") pod \"barbican-operator-controller-manager-86dc4d89c8-4jwxd\" (UID: \"a4945263-5f74-4c93-b782-8a381e40275c\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4jwxd" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.993526 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbtfw\" (UniqueName: \"kubernetes.io/projected/67cd7ebd-5d77-4c59-a1af-2283997e4de4-kube-api-access-gbtfw\") pod \"glance-operator-controller-manager-68b95954c9-5s9dg\" (UID: \"67cd7ebd-5d77-4c59-a1af-2283997e4de4\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-5s9dg" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.993556 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8mks\" (UniqueName: \"kubernetes.io/projected/459e53de-60cc-4763-a093-4940428df8c3-kube-api-access-p8mks\") pod \"cinder-operator-controller-manager-79856dc55c-756nd\" (UID: \"459e53de-60cc-4763-a093-4940428df8c3\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-756nd" Nov 24 11:23:07 crc kubenswrapper[5072]: I1124 11:23:07.993623 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs8ft\" (UniqueName: \"kubernetes.io/projected/500235e4-633d-486d-8ea9-bc0830747b6f-kube-api-access-bs8ft\") pod \"designate-operator-controller-manager-7d695c9b56-bpsnt\" (UID: \"500235e4-633d-486d-8ea9-bc0830747b6f\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-bpsnt" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.007320 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-rbff2"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.008235 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-rbff2" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.012174 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-6588bc459f-mnxdw"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.015818 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6588bc459f-mnxdw" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.016450 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-9mfr8" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.024051 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-pmvkz" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.029346 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7mzzw"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.035929 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnw7h\" (UniqueName: \"kubernetes.io/projected/a4945263-5f74-4c93-b782-8a381e40275c-kube-api-access-xnw7h\") pod \"barbican-operator-controller-manager-86dc4d89c8-4jwxd\" (UID: \"a4945263-5f74-4c93-b782-8a381e40275c\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4jwxd" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.036087 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbtfw\" (UniqueName: \"kubernetes.io/projected/67cd7ebd-5d77-4c59-a1af-2283997e4de4-kube-api-access-gbtfw\") pod \"glance-operator-controller-manager-68b95954c9-5s9dg\" (UID: \"67cd7ebd-5d77-4c59-a1af-2283997e4de4\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-5s9dg" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.048914 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs8ft\" (UniqueName: \"kubernetes.io/projected/500235e4-633d-486d-8ea9-bc0830747b6f-kube-api-access-bs8ft\") pod \"designate-operator-controller-manager-7d695c9b56-bpsnt\" (UID: \"500235e4-633d-486d-8ea9-bc0830747b6f\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-bpsnt" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.052430 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-vwkpc"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.053393 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-vwkpc" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.059874 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-gxlzc" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.061442 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b7nnc"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.062396 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b7nnc" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.092811 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-sdpdr" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.094597 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf8bn\" (UniqueName: \"kubernetes.io/projected/62a8ddcc-1b1e-4bd6-8e4b-41273932a900-kube-api-access-jf8bn\") pod \"heat-operator-controller-manager-774b86978c-qn647\" (UID: \"62a8ddcc-1b1e-4bd6-8e4b-41273932a900\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-qn647" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.094634 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-276c7\" (UniqueName: \"kubernetes.io/projected/e8ca42b5-22f1-4101-bbf6-d053bda8b6f2-kube-api-access-276c7\") pod \"infra-operator-controller-manager-858778c9dc-lrk4z\" (UID: \"e8ca42b5-22f1-4101-bbf6-d053bda8b6f2\") " pod="openstack-operators/infra-operator-controller-manager-858778c9dc-lrk4z" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.094666 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8mks\" (UniqueName: \"kubernetes.io/projected/459e53de-60cc-4763-a093-4940428df8c3-kube-api-access-p8mks\") pod \"cinder-operator-controller-manager-79856dc55c-756nd\" (UID: \"459e53de-60cc-4763-a093-4940428df8c3\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-756nd" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.094686 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6rtl\" (UniqueName: \"kubernetes.io/projected/bdcb07cf-3d31-40c8-bd3b-1c791408a3b9-kube-api-access-d6rtl\") pod \"horizon-operator-controller-manager-68c9694994-wkqz4\" (UID: \"bdcb07cf-3d31-40c8-bd3b-1c791408a3b9\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wkqz4" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.094717 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e8ca42b5-22f1-4101-bbf6-d053bda8b6f2-cert\") pod \"infra-operator-controller-manager-858778c9dc-lrk4z\" (UID: \"e8ca42b5-22f1-4101-bbf6-d053bda8b6f2\") " pod="openstack-operators/infra-operator-controller-manager-858778c9dc-lrk4z" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.094741 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgn9s\" (UniqueName: \"kubernetes.io/projected/d7f60d9f-304e-4531-aeec-6c4a576d3a1e-kube-api-access-qgn9s\") pod \"ironic-operator-controller-manager-5bfcdc958c-7mzzw\" (UID: \"d7f60d9f-304e-4531-aeec-6c4a576d3a1e\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7mzzw" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.119435 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-vwkpc"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.133202 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8mks\" (UniqueName: 
\"kubernetes.io/projected/459e53de-60cc-4763-a093-4940428df8c3-kube-api-access-p8mks\") pod \"cinder-operator-controller-manager-79856dc55c-756nd\" (UID: \"459e53de-60cc-4763-a093-4940428df8c3\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-756nd" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.133631 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-rbff2"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.135426 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b7nnc"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.140707 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4jwxd" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.143335 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-r7mbw"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.144508 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-r7mbw" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.149739 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-rhmk2" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.166417 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6588bc459f-mnxdw"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.166668 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-756nd" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.182687 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-4z4cm"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.183642 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-bpsnt" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.183713 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4z4cm" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.188194 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-mvlnn" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.190785 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-r7mbw"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.195663 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvqhv\" (UniqueName: \"kubernetes.io/projected/9696dd76-5a2d-46d8-b344-bde781c44bd9-kube-api-access-vvqhv\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-vwkpc\" (UID: \"9696dd76-5a2d-46d8-b344-bde781c44bd9\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-vwkpc" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.195699 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf8bn\" (UniqueName: \"kubernetes.io/projected/62a8ddcc-1b1e-4bd6-8e4b-41273932a900-kube-api-access-jf8bn\") pod \"heat-operator-controller-manager-774b86978c-qn647\" (UID: \"62a8ddcc-1b1e-4bd6-8e4b-41273932a900\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-qn647" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.195726 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfb4z\" (UniqueName: \"kubernetes.io/projected/7bf279a5-5615-474c-8f17-0066eb4a681d-kube-api-access-lfb4z\") pod \"manila-operator-controller-manager-6588bc459f-mnxdw\" (UID: \"7bf279a5-5615-474c-8f17-0066eb4a681d\") " pod="openstack-operators/manila-operator-controller-manager-6588bc459f-mnxdw" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.195754 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-276c7\" (UniqueName: \"kubernetes.io/projected/e8ca42b5-22f1-4101-bbf6-d053bda8b6f2-kube-api-access-276c7\") pod \"infra-operator-controller-manager-858778c9dc-lrk4z\" (UID: \"e8ca42b5-22f1-4101-bbf6-d053bda8b6f2\") " pod="openstack-operators/infra-operator-controller-manager-858778c9dc-lrk4z" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.195775 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msnw4\" (UniqueName: \"kubernetes.io/projected/82a02d23-10da-4e39-a81a-9f63180ecc89-kube-api-access-msnw4\") pod \"neutron-operator-controller-manager-7c57c8bbc4-b7nnc\" (UID: \"82a02d23-10da-4e39-a81a-9f63180ecc89\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b7nnc" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.195797 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6rtl\" (UniqueName: \"kubernetes.io/projected/bdcb07cf-3d31-40c8-bd3b-1c791408a3b9-kube-api-access-d6rtl\") pod \"horizon-operator-controller-manager-68c9694994-wkqz4\" (UID: \"bdcb07cf-3d31-40c8-bd3b-1c791408a3b9\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wkqz4" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.195822 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e8ca42b5-22f1-4101-bbf6-d053bda8b6f2-cert\") pod 
\"infra-operator-controller-manager-858778c9dc-lrk4z\" (UID: \"e8ca42b5-22f1-4101-bbf6-d053bda8b6f2\") " pod="openstack-operators/infra-operator-controller-manager-858778c9dc-lrk4z" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.195843 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgn9s\" (UniqueName: \"kubernetes.io/projected/d7f60d9f-304e-4531-aeec-6c4a576d3a1e-kube-api-access-qgn9s\") pod \"ironic-operator-controller-manager-5bfcdc958c-7mzzw\" (UID: \"d7f60d9f-304e-4531-aeec-6c4a576d3a1e\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7mzzw" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.195885 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mq82\" (UniqueName: \"kubernetes.io/projected/39f25192-6179-44cd-894a-0ebf01a675e1-kube-api-access-6mq82\") pod \"keystone-operator-controller-manager-748dc6576f-rbff2\" (UID: \"39f25192-6179-44cd-894a-0ebf01a675e1\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-rbff2" Nov 24 11:23:08 crc kubenswrapper[5072]: E1124 11:23:08.200501 5072 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 24 11:23:08 crc kubenswrapper[5072]: E1124 11:23:08.200553 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e8ca42b5-22f1-4101-bbf6-d053bda8b6f2-cert podName:e8ca42b5-22f1-4101-bbf6-d053bda8b6f2 nodeName:}" failed. No retries permitted until 2025-11-24 11:23:08.700539709 +0000 UTC m=+840.412064185 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e8ca42b5-22f1-4101-bbf6-d053bda8b6f2-cert") pod "infra-operator-controller-manager-858778c9dc-lrk4z" (UID: "e8ca42b5-22f1-4101-bbf6-d053bda8b6f2") : secret "infra-operator-webhook-server-cert" not found Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.205536 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-5s9dg" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.213765 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-4z4cm"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.243065 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-5sknj"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.244006 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-5sknj" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.251443 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-pp9r4" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.251627 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.257489 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6rtl\" (UniqueName: \"kubernetes.io/projected/bdcb07cf-3d31-40c8-bd3b-1c791408a3b9-kube-api-access-d6rtl\") pod \"horizon-operator-controller-manager-68c9694994-wkqz4\" (UID: \"bdcb07cf-3d31-40c8-bd3b-1c791408a3b9\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wkqz4" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.259771 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf8bn\" (UniqueName: \"kubernetes.io/projected/62a8ddcc-1b1e-4bd6-8e4b-41273932a900-kube-api-access-jf8bn\") pod \"heat-operator-controller-manager-774b86978c-qn647\" (UID: \"62a8ddcc-1b1e-4bd6-8e4b-41273932a900\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-qn647" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.262509 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-qn647" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.268128 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgn9s\" (UniqueName: \"kubernetes.io/projected/d7f60d9f-304e-4531-aeec-6c4a576d3a1e-kube-api-access-qgn9s\") pod \"ironic-operator-controller-manager-5bfcdc958c-7mzzw\" (UID: \"d7f60d9f-304e-4531-aeec-6c4a576d3a1e\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7mzzw" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.268751 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p6hcl"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.270634 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p6hcl" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.273603 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-69vkp" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.298109 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-276c7\" (UniqueName: \"kubernetes.io/projected/e8ca42b5-22f1-4101-bbf6-d053bda8b6f2-kube-api-access-276c7\") pod \"infra-operator-controller-manager-858778c9dc-lrk4z\" (UID: \"e8ca42b5-22f1-4101-bbf6-d053bda8b6f2\") " pod="openstack-operators/infra-operator-controller-manager-858778c9dc-lrk4z" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.300829 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wkqz4" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.330278 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqf95\" (UniqueName: \"kubernetes.io/projected/1b89d966-3ff3-451d-859c-0198a7cde893-kube-api-access-nqf95\") pod \"octavia-operator-controller-manager-fd75fd47d-4z4cm\" (UID: \"1b89d966-3ff3-451d-859c-0198a7cde893\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4z4cm" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.330388 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mq82\" (UniqueName: \"kubernetes.io/projected/39f25192-6179-44cd-894a-0ebf01a675e1-kube-api-access-6mq82\") pod \"keystone-operator-controller-manager-748dc6576f-rbff2\" (UID: \"39f25192-6179-44cd-894a-0ebf01a675e1\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-rbff2" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.330461 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvqhv\" (UniqueName: \"kubernetes.io/projected/9696dd76-5a2d-46d8-b344-bde781c44bd9-kube-api-access-vvqhv\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-vwkpc\" (UID: \"9696dd76-5a2d-46d8-b344-bde781c44bd9\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-vwkpc" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.330510 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfb4z\" (UniqueName: \"kubernetes.io/projected/7bf279a5-5615-474c-8f17-0066eb4a681d-kube-api-access-lfb4z\") pod \"manila-operator-controller-manager-6588bc459f-mnxdw\" (UID: \"7bf279a5-5615-474c-8f17-0066eb4a681d\") " pod="openstack-operators/manila-operator-controller-manager-6588bc459f-mnxdw" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.330571 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msnw4\" (UniqueName: \"kubernetes.io/projected/82a02d23-10da-4e39-a81a-9f63180ecc89-kube-api-access-msnw4\") pod \"neutron-operator-controller-manager-7c57c8bbc4-b7nnc\" (UID: \"82a02d23-10da-4e39-a81a-9f63180ecc89\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b7nnc" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.330662 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzlhs\" (UniqueName: \"kubernetes.io/projected/fc8a9f5f-37fe-417e-9016-886b359a5a71-kube-api-access-wzlhs\") pod \"nova-operator-controller-manager-79556f57fc-r7mbw\" (UID: \"fc8a9f5f-37fe-417e-9016-886b359a5a71\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-r7mbw" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.341400 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-5sknj"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.349659 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7mzzw" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.356773 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p6hcl"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.384318 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msnw4\" (UniqueName: \"kubernetes.io/projected/82a02d23-10da-4e39-a81a-9f63180ecc89-kube-api-access-msnw4\") pod \"neutron-operator-controller-manager-7c57c8bbc4-b7nnc\" (UID: \"82a02d23-10da-4e39-a81a-9f63180ecc89\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b7nnc" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.396601 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfb4z\" (UniqueName: \"kubernetes.io/projected/7bf279a5-5615-474c-8f17-0066eb4a681d-kube-api-access-lfb4z\") pod \"manila-operator-controller-manager-6588bc459f-mnxdw\" (UID: \"7bf279a5-5615-474c-8f17-0066eb4a681d\") " pod="openstack-operators/manila-operator-controller-manager-6588bc459f-mnxdw" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.398970 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mq82\" (UniqueName: \"kubernetes.io/projected/39f25192-6179-44cd-894a-0ebf01a675e1-kube-api-access-6mq82\") pod \"keystone-operator-controller-manager-748dc6576f-rbff2\" (UID: \"39f25192-6179-44cd-894a-0ebf01a675e1\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-rbff2" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.406403 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-jh4nt"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.408165 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6588bc459f-mnxdw" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.417326 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-jh4nt" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.418297 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvqhv\" (UniqueName: \"kubernetes.io/projected/9696dd76-5a2d-46d8-b344-bde781c44bd9-kube-api-access-vvqhv\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-vwkpc\" (UID: \"9696dd76-5a2d-46d8-b344-bde781c44bd9\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-vwkpc" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.421257 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-4hkh2" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.434481 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqf95\" (UniqueName: \"kubernetes.io/projected/1b89d966-3ff3-451d-859c-0198a7cde893-kube-api-access-nqf95\") pod \"octavia-operator-controller-manager-fd75fd47d-4z4cm\" (UID: \"1b89d966-3ff3-451d-859c-0198a7cde893\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4z4cm" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.434533 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdgwv\" (UniqueName: \"kubernetes.io/projected/edb8360f-2977-47c4-9029-02341a92a6de-kube-api-access-bdgwv\") pod \"ovn-operator-controller-manager-66cf5c67ff-p6hcl\" (UID: \"edb8360f-2977-47c4-9029-02341a92a6de\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p6hcl" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.434556 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnw66\" (UniqueName: \"kubernetes.io/projected/ff7d4c70-56ad-4baa-b7eb-bba77d3811bb-kube-api-access-cnw66\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-5sknj\" (UID: \"ff7d4c70-56ad-4baa-b7eb-bba77d3811bb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-5sknj" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.434677 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlhs\" (UniqueName: \"kubernetes.io/projected/fc8a9f5f-37fe-417e-9016-886b359a5a71-kube-api-access-wzlhs\") pod \"nova-operator-controller-manager-79556f57fc-r7mbw\" (UID: \"fc8a9f5f-37fe-417e-9016-886b359a5a71\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-r7mbw" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.434695 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ff7d4c70-56ad-4baa-b7eb-bba77d3811bb-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-5sknj\" (UID: \"ff7d4c70-56ad-4baa-b7eb-bba77d3811bb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-5sknj" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.436004 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-vwkpc" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.487671 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b7nnc" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.488678 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-jh4nt"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.491543 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqf95\" (UniqueName: \"kubernetes.io/projected/1b89d966-3ff3-451d-859c-0198a7cde893-kube-api-access-nqf95\") pod \"octavia-operator-controller-manager-fd75fd47d-4z4cm\" (UID: \"1b89d966-3ff3-451d-859c-0198a7cde893\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4z4cm" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.495698 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzlhs\" (UniqueName: \"kubernetes.io/projected/fc8a9f5f-37fe-417e-9016-886b359a5a71-kube-api-access-wzlhs\") pod \"nova-operator-controller-manager-79556f57fc-r7mbw\" (UID: \"fc8a9f5f-37fe-417e-9016-886b359a5a71\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-r7mbw" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.515842 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-r7bsx"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.516943 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-r7bsx" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.517948 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-cfj6h"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.519185 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-cfj6h" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.520721 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-9892q" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.521241 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-dxm8h" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.537173 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-r7bsx"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.539543 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7s5b\" (UniqueName: \"kubernetes.io/projected/64a55d3a-a7ab-4bce-8497-1992e9591a90-kube-api-access-r7s5b\") pod \"placement-operator-controller-manager-5db546f9d9-jh4nt\" (UID: \"64a55d3a-a7ab-4bce-8497-1992e9591a90\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-jh4nt" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.547448 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ff7d4c70-56ad-4baa-b7eb-bba77d3811bb-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-5sknj\" (UID: \"ff7d4c70-56ad-4baa-b7eb-bba77d3811bb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-5sknj" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.547753 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdgwv\" (UniqueName: \"kubernetes.io/projected/edb8360f-2977-47c4-9029-02341a92a6de-kube-api-access-bdgwv\") pod \"ovn-operator-controller-manager-66cf5c67ff-p6hcl\" (UID: \"edb8360f-2977-47c4-9029-02341a92a6de\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p6hcl" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.547786 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnw66\" (UniqueName: \"kubernetes.io/projected/ff7d4c70-56ad-4baa-b7eb-bba77d3811bb-kube-api-access-cnw66\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-5sknj\" (UID: \"ff7d4c70-56ad-4baa-b7eb-bba77d3811bb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-5sknj" Nov 24 11:23:08 crc kubenswrapper[5072]: E1124 11:23:08.548279 5072 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 11:23:08 crc kubenswrapper[5072]: E1124 11:23:08.548329 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff7d4c70-56ad-4baa-b7eb-bba77d3811bb-cert podName:ff7d4c70-56ad-4baa-b7eb-bba77d3811bb nodeName:}" failed. No retries permitted until 2025-11-24 11:23:09.048312453 +0000 UTC m=+840.759836939 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ff7d4c70-56ad-4baa-b7eb-bba77d3811bb-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-5sknj" (UID: "ff7d4c70-56ad-4baa-b7eb-bba77d3811bb") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.565687 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-cfj6h"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.569534 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-dvldw"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.570965 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-dvldw" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.577796 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-lf84c" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.589794 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-dvldw"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.601532 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-r7mbw" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.609168 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnw66\" (UniqueName: \"kubernetes.io/projected/ff7d4c70-56ad-4baa-b7eb-bba77d3811bb-kube-api-access-cnw66\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-5sknj\" (UID: \"ff7d4c70-56ad-4baa-b7eb-bba77d3811bb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-5sknj" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.615166 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-bz2zj"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.625049 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdgwv\" (UniqueName: \"kubernetes.io/projected/edb8360f-2977-47c4-9029-02341a92a6de-kube-api-access-bdgwv\") pod \"ovn-operator-controller-manager-66cf5c67ff-p6hcl\" (UID: \"edb8360f-2977-47c4-9029-02341a92a6de\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p6hcl" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.625475 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-bz2zj" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.627096 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-jw52x" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.633119 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-bz2zj"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.633765 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-rbff2" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.648660 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zhql\" (UniqueName: \"kubernetes.io/projected/321368f6-c64b-4d58-ae2a-e939d6d447f7-kube-api-access-6zhql\") pod \"swift-operator-controller-manager-6fdc4fcf86-r7bsx\" (UID: \"321368f6-c64b-4d58-ae2a-e939d6d447f7\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-r7bsx" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.648700 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6xsm\" (UniqueName: \"kubernetes.io/projected/cd9a8dda-b29e-4e10-837a-d00bdcf6bdaa-kube-api-access-s6xsm\") pod \"test-operator-controller-manager-5cb74df96-dvldw\" (UID: \"cd9a8dda-b29e-4e10-837a-d00bdcf6bdaa\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-dvldw" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.648729 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7s5b\" (UniqueName: \"kubernetes.io/projected/64a55d3a-a7ab-4bce-8497-1992e9591a90-kube-api-access-r7s5b\") pod \"placement-operator-controller-manager-5db546f9d9-jh4nt\" (UID: \"64a55d3a-a7ab-4bce-8497-1992e9591a90\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-jh4nt" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.648782 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxzgp\" (UniqueName: \"kubernetes.io/projected/7c599673-db2a-4c37-88fa-45e7166f6c20-kube-api-access-vxzgp\") pod \"telemetry-operator-controller-manager-567f98c9d-cfj6h\" (UID: \"7c599673-db2a-4c37-88fa-45e7166f6c20\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-cfj6h" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.673356 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-698dfbd98-5pfmt"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.675593 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-698dfbd98-5pfmt" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.680539 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.680790 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.680838 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7s5b\" (UniqueName: \"kubernetes.io/projected/64a55d3a-a7ab-4bce-8497-1992e9591a90-kube-api-access-r7s5b\") pod \"placement-operator-controller-manager-5db546f9d9-jh4nt\" (UID: \"64a55d3a-a7ab-4bce-8497-1992e9591a90\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-jh4nt" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.680998 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-dcvrs" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.686772 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-698dfbd98-5pfmt"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.694781 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lgdqp"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.697941 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lgdqp" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.701407 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lgdqp"] Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.707090 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-xdvmg" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.728568 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p6hcl" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.760247 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-jh4nt" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.761138 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfdpd\" (UniqueName: \"kubernetes.io/projected/0d17eb13-802b-4d4a-b221-1481e16e1110-kube-api-access-dfdpd\") pod \"watcher-operator-controller-manager-864885998-bz2zj\" (UID: \"0d17eb13-802b-4d4a-b221-1481e16e1110\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-bz2zj" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.761161 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z667c\" (UniqueName: \"kubernetes.io/projected/ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400-kube-api-access-z667c\") pod \"openstack-operator-controller-manager-698dfbd98-5pfmt\" (UID: \"ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400\") " pod="openstack-operators/openstack-operator-controller-manager-698dfbd98-5pfmt" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.761186 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e8ca42b5-22f1-4101-bbf6-d053bda8b6f2-cert\") pod \"infra-operator-controller-manager-858778c9dc-lrk4z\" (UID: \"e8ca42b5-22f1-4101-bbf6-d053bda8b6f2\") " pod="openstack-operators/infra-operator-controller-manager-858778c9dc-lrk4z" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.761210 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxzgp\" (UniqueName: \"kubernetes.io/projected/7c599673-db2a-4c37-88fa-45e7166f6c20-kube-api-access-vxzgp\") pod \"telemetry-operator-controller-manager-567f98c9d-cfj6h\" (UID: \"7c599673-db2a-4c37-88fa-45e7166f6c20\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-cfj6h" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.761230 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400-metrics-certs\") pod \"openstack-operator-controller-manager-698dfbd98-5pfmt\" (UID: \"ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400\") " pod="openstack-operators/openstack-operator-controller-manager-698dfbd98-5pfmt" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.761287 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zhql\" (UniqueName: \"kubernetes.io/projected/321368f6-c64b-4d58-ae2a-e939d6d447f7-kube-api-access-6zhql\") pod \"swift-operator-controller-manager-6fdc4fcf86-r7bsx\" (UID: \"321368f6-c64b-4d58-ae2a-e939d6d447f7\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-r7bsx" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.761306 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6xsm\" (UniqueName: \"kubernetes.io/projected/cd9a8dda-b29e-4e10-837a-d00bdcf6bdaa-kube-api-access-s6xsm\") pod \"test-operator-controller-manager-5cb74df96-dvldw\" (UID: \"cd9a8dda-b29e-4e10-837a-d00bdcf6bdaa\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-dvldw" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.761335 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400-webhook-certs\") pod \"openstack-operator-controller-manager-698dfbd98-5pfmt\" (UID: \"ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400\") " pod="openstack-operators/openstack-operator-controller-manager-698dfbd98-5pfmt" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.768869 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4z4cm" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.780036 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e8ca42b5-22f1-4101-bbf6-d053bda8b6f2-cert\") pod \"infra-operator-controller-manager-858778c9dc-lrk4z\" (UID: \"e8ca42b5-22f1-4101-bbf6-d053bda8b6f2\") " pod="openstack-operators/infra-operator-controller-manager-858778c9dc-lrk4z" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.786279 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6xsm\" (UniqueName: \"kubernetes.io/projected/cd9a8dda-b29e-4e10-837a-d00bdcf6bdaa-kube-api-access-s6xsm\") pod \"test-operator-controller-manager-5cb74df96-dvldw\" (UID: \"cd9a8dda-b29e-4e10-837a-d00bdcf6bdaa\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-dvldw" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.793996 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxzgp\" (UniqueName: \"kubernetes.io/projected/7c599673-db2a-4c37-88fa-45e7166f6c20-kube-api-access-vxzgp\") pod \"telemetry-operator-controller-manager-567f98c9d-cfj6h\" (UID: \"7c599673-db2a-4c37-88fa-45e7166f6c20\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-cfj6h" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.795187 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zhql\" (UniqueName: \"kubernetes.io/projected/321368f6-c64b-4d58-ae2a-e939d6d447f7-kube-api-access-6zhql\") pod \"swift-operator-controller-manager-6fdc4fcf86-r7bsx\" (UID: \"321368f6-c64b-4d58-ae2a-e939d6d447f7\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-r7bsx" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.846683 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-cfj6h" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.862592 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckxfp\" (UniqueName: \"kubernetes.io/projected/88168be8-a585-468a-a983-f56bbb31b4a0-kube-api-access-ckxfp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-lgdqp\" (UID: \"88168be8-a585-468a-a983-f56bbb31b4a0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lgdqp" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.862665 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400-webhook-certs\") pod \"openstack-operator-controller-manager-698dfbd98-5pfmt\" (UID: \"ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400\") " pod="openstack-operators/openstack-operator-controller-manager-698dfbd98-5pfmt" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.862714 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfdpd\" (UniqueName: \"kubernetes.io/projected/0d17eb13-802b-4d4a-b221-1481e16e1110-kube-api-access-dfdpd\") pod \"watcher-operator-controller-manager-864885998-bz2zj\" (UID: \"0d17eb13-802b-4d4a-b221-1481e16e1110\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-bz2zj" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.862730 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z667c\" (UniqueName: \"kubernetes.io/projected/ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400-kube-api-access-z667c\") pod \"openstack-operator-controller-manager-698dfbd98-5pfmt\" (UID: \"ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400\") " pod="openstack-operators/openstack-operator-controller-manager-698dfbd98-5pfmt" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.862777 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400-metrics-certs\") pod \"openstack-operator-controller-manager-698dfbd98-5pfmt\" (UID: \"ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400\") " pod="openstack-operators/openstack-operator-controller-manager-698dfbd98-5pfmt" Nov 24 11:23:08 crc kubenswrapper[5072]: E1124 11:23:08.863646 5072 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 11:23:08 crc kubenswrapper[5072]: E1124 11:23:08.863725 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400-webhook-certs podName:ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400 nodeName:}" failed. No retries permitted until 2025-11-24 11:23:09.36370308 +0000 UTC m=+841.075227636 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400-webhook-certs") pod "openstack-operator-controller-manager-698dfbd98-5pfmt" (UID: "ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400") : secret "webhook-server-cert" not found Nov 24 11:23:08 crc kubenswrapper[5072]: E1124 11:23:08.864098 5072 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 24 11:23:08 crc kubenswrapper[5072]: E1124 11:23:08.864176 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400-metrics-certs podName:ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400 nodeName:}" failed. No retries permitted until 2025-11-24 11:23:09.364161512 +0000 UTC m=+841.075685988 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400-metrics-certs") pod "openstack-operator-controller-manager-698dfbd98-5pfmt" (UID: "ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400") : secret "metrics-server-cert" not found Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.865203 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-r7bsx" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.882357 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-lrk4z" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.885992 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z667c\" (UniqueName: \"kubernetes.io/projected/ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400-kube-api-access-z667c\") pod \"openstack-operator-controller-manager-698dfbd98-5pfmt\" (UID: \"ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400\") " pod="openstack-operators/openstack-operator-controller-manager-698dfbd98-5pfmt" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.888688 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-dvldw" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.894508 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfdpd\" (UniqueName: \"kubernetes.io/projected/0d17eb13-802b-4d4a-b221-1481e16e1110-kube-api-access-dfdpd\") pod \"watcher-operator-controller-manager-864885998-bz2zj\" (UID: \"0d17eb13-802b-4d4a-b221-1481e16e1110\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-bz2zj" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.916607 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-bz2zj" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.965487 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckxfp\" (UniqueName: \"kubernetes.io/projected/88168be8-a585-468a-a983-f56bbb31b4a0-kube-api-access-ckxfp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-lgdqp\" (UID: \"88168be8-a585-468a-a983-f56bbb31b4a0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lgdqp" Nov 24 11:23:08 crc kubenswrapper[5072]: I1124 11:23:08.992968 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckxfp\" (UniqueName: \"kubernetes.io/projected/88168be8-a585-468a-a983-f56bbb31b4a0-kube-api-access-ckxfp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-lgdqp\" (UID: \"88168be8-a585-468a-a983-f56bbb31b4a0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lgdqp" Nov 24 11:23:09 crc kubenswrapper[5072]: I1124 11:23:09.017908 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-xdvmg" Nov 24 11:23:09 crc kubenswrapper[5072]: I1124 11:23:09.025949 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lgdqp" Nov 24 11:23:09 crc kubenswrapper[5072]: I1124 11:23:09.073931 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ff7d4c70-56ad-4baa-b7eb-bba77d3811bb-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-5sknj\" (UID: \"ff7d4c70-56ad-4baa-b7eb-bba77d3811bb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-5sknj" Nov 24 11:23:09 crc kubenswrapper[5072]: E1124 11:23:09.075071 5072 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 11:23:09 crc kubenswrapper[5072]: E1124 11:23:09.075122 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff7d4c70-56ad-4baa-b7eb-bba77d3811bb-cert podName:ff7d4c70-56ad-4baa-b7eb-bba77d3811bb nodeName:}" failed. No retries permitted until 2025-11-24 11:23:10.075102216 +0000 UTC m=+841.786626692 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ff7d4c70-56ad-4baa-b7eb-bba77d3811bb-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-5sknj" (UID: "ff7d4c70-56ad-4baa-b7eb-bba77d3811bb") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 24 11:23:09 crc kubenswrapper[5072]: I1124 11:23:09.378965 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400-webhook-certs\") pod \"openstack-operator-controller-manager-698dfbd98-5pfmt\" (UID: \"ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400\") " pod="openstack-operators/openstack-operator-controller-manager-698dfbd98-5pfmt"
Nov 24 11:23:09 crc kubenswrapper[5072]: I1124 11:23:09.379042 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400-metrics-certs\") pod \"openstack-operator-controller-manager-698dfbd98-5pfmt\" (UID: \"ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400\") " pod="openstack-operators/openstack-operator-controller-manager-698dfbd98-5pfmt"
Nov 24 11:23:09 crc kubenswrapper[5072]: E1124 11:23:09.379191 5072 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Nov 24 11:23:09 crc kubenswrapper[5072]: E1124 11:23:09.379246 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400-metrics-certs podName:ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400 nodeName:}" failed. No retries permitted until 2025-11-24 11:23:10.37921892 +0000 UTC m=+842.090743396 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400-metrics-certs") pod "openstack-operator-controller-manager-698dfbd98-5pfmt" (UID: "ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400") : secret "metrics-server-cert" not found
Nov 24 11:23:09 crc kubenswrapper[5072]: E1124 11:23:09.379639 5072 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Nov 24 11:23:09 crc kubenswrapper[5072]: E1124 11:23:09.379679 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400-webhook-certs podName:ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400 nodeName:}" failed. No retries permitted until 2025-11-24 11:23:10.379668921 +0000 UTC m=+842.091193397 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400-webhook-certs") pod "openstack-operator-controller-manager-698dfbd98-5pfmt" (UID: "ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400") : secret "webhook-server-cert" not found
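
Annotation: the mount failures above are all blocked on two Secrets, openstack-operators/webhook-server-cert and openstack-operators/metrics-server-cert, that do not exist yet; in this kind of deployment they are usually created later by whatever issues the operator's serving certificates (cert-manager or the operator bundle), and the kubelet simply re-queues MountVolume.SetUp until they appear. A minimal client-go sketch for watching when the Secrets land; the helper itself is hypothetical and assumes a reachable kubeconfig at $HOME/.kube/config, with the namespace and Secret names taken verbatim from the log entries:

    // checkcerts.go -- hypothetical helper, not part of this cluster or of the
    // openstack-operator; it only polls for the two Secrets the mounts above
    // are waiting on. Assumes a kubeconfig at $HOME/.kube/config.
    package main

    import (
        "context"
        "fmt"
        "os"
        "path/filepath"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for _, name := range []string{"webhook-server-cert", "metrics-server-cert"} {
            _, err := cs.CoreV1().Secrets("openstack-operators").Get(context.TODO(), name, metav1.GetOptions{})
            switch {
            case apierrors.IsNotFound(err):
                fmt.Printf("secret %q still missing; the kubelet will keep retrying the mount\n", name)
            case err != nil:
                fmt.Printf("lookup for %q failed: %v\n", name, err)
            default:
                fmt.Printf("secret %q exists; the next MountVolume retry should succeed\n", name)
            }
        }
    }

Once both Secrets exist, the next scheduled retry in the log should flip to "MountVolume.SetUp succeeded" with no manual intervention.
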
Nov 24 11:23:09 crc kubenswrapper[5072]: I1124 11:23:09.647237 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-5s9dg"]
Nov 24 11:23:09 crc kubenswrapper[5072]: I1124 11:23:09.671505 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-756nd"]
Nov 24 11:23:09 crc kubenswrapper[5072]: I1124 11:23:09.687500 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4jwxd"]
Nov 24 11:23:09 crc kubenswrapper[5072]: I1124 11:23:09.712858 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-bpsnt"]
Nov 24 11:23:09 crc kubenswrapper[5072]: I1124 11:23:09.750468 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6588bc459f-mnxdw"]
Nov 24 11:23:09 crc kubenswrapper[5072]: I1124 11:23:09.765145 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-cfj6h"]
Nov 24 11:23:09 crc kubenswrapper[5072]: I1124 11:23:09.774421 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7mzzw"]
Nov 24 11:23:09 crc kubenswrapper[5072]: I1124 11:23:09.787517 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-jh4nt"]
Nov 24 11:23:09 crc kubenswrapper[5072]: W1124 11:23:09.789072 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64a55d3a_a7ab_4bce_8497_1992e9591a90.slice/crio-ac821e5c80dd4c71d99f1efbb7b871c71d8c64b558280d75b559aa88b1d03c27 WatchSource:0}: Error finding container ac821e5c80dd4c71d99f1efbb7b871c71d8c64b558280d75b559aa88b1d03c27: Status 404 returned error can't find the container with id ac821e5c80dd4c71d99f1efbb7b871c71d8c64b558280d75b559aa88b1d03c27
Nov 24 11:23:09 crc kubenswrapper[5072]: I1124 11:23:09.791700 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-qn647"]
Nov 24 11:23:09 crc kubenswrapper[5072]: I1124 11:23:09.794744 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-vwkpc"]
Nov 24 11:23:09 crc kubenswrapper[5072]: I1124 11:23:09.936860 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-rbff2"]
Nov 24 11:23:09 crc kubenswrapper[5072]: W1124 11:23:09.960636 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39f25192_6179_44cd_894a_0ebf01a675e1.slice/crio-f1d90aa380f53750f1f39ec53fff31e218291799f5a2bbea3534d0b70f56555c WatchSource:0}: Error finding container f1d90aa380f53750f1f39ec53fff31e218291799f5a2bbea3534d0b70f56555c: Status 404 returned error can't find the container with id f1d90aa380f53750f1f39ec53fff31e218291799f5a2bbea3534d0b70f56555c Nov
24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.018799 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-bz2zj"] Nov 24 11:23:10 crc kubenswrapper[5072]: W1124 11:23:10.026326 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d17eb13_802b_4d4a_b221_1481e16e1110.slice/crio-2ccaf4ad1a76b25a17cf6b941ef94e0abbe9476bb1c646b4835102432d5a0034 WatchSource:0}: Error finding container 2ccaf4ad1a76b25a17cf6b941ef94e0abbe9476bb1c646b4835102432d5a0034: Status 404 returned error can't find the container with id 2ccaf4ad1a76b25a17cf6b941ef94e0abbe9476bb1c646b4835102432d5a0034 Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.030496 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dfdpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-bz2zj_openstack-operators(0d17eb13-802b-4d4a-b221-1481e16e1110): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.033887 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dfdpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-bz2zj_openstack-operators(0d17eb13-802b-4d4a-b221-1481e16e1110): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.035066 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-bz2zj" podUID="0d17eb13-802b-4d4a-b221-1481e16e1110" Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.091858 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ff7d4c70-56ad-4baa-b7eb-bba77d3811bb-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-5sknj\" (UID: \"ff7d4c70-56ad-4baa-b7eb-bba77d3811bb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-5sknj" Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.097428 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ff7d4c70-56ad-4baa-b7eb-bba77d3811bb-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-5sknj\" (UID: \"ff7d4c70-56ad-4baa-b7eb-bba77d3811bb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-5sknj" Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.188486 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-4z4cm"] Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.211521 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-r7bsx"] Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.223510 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-wkqz4"] Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.227663 
5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-858778c9dc-lrk4z"] Nov 24 11:23:10 crc kubenswrapper[5072]: W1124 11:23:10.229793 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8ca42b5_22f1_4101_bbf6_d053bda8b6f2.slice/crio-b92e6f8a97adf49692a9befbf9ce1def678e3c94103c066f3cb1034794447ba0 WatchSource:0}: Error finding container b92e6f8a97adf49692a9befbf9ce1def678e3c94103c066f3cb1034794447ba0: Status 404 returned error can't find the container with id b92e6f8a97adf49692a9befbf9ce1def678e3c94103c066f3cb1034794447ba0 Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.232547 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b7nnc"] Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.232855 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:f0688f6a55b7b548aaafd5c2c4f0749a43e7ea447c62a24e8b35257c5d8ba17f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-276c7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-858778c9dc-lrk4z_openstack-operators(e8ca42b5-22f1-4101-bbf6-d053bda8b6f2): ErrImagePull: pull QPS exceeded" 
logger="UnhandledError" Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.233485 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nqf95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-fd75fd47d-4z4cm_openstack-operators(1b89d966-3ff3-451d-859c-0198a7cde893): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.237189 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nqf95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-fd75fd47d-4z4cm_openstack-operators(1b89d966-3ff3-451d-859c-0198a7cde893): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.237193 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-276c7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-858778c9dc-lrk4z_openstack-operators(e8ca42b5-22f1-4101-bbf6-d053bda8b6f2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.237983 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-r7mbw"] Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.238311 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-lrk4z" podUID="e8ca42b5-22f1-4101-bbf6-d053bda8b6f2" Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.238450 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4z4cm" podUID="1b89d966-3ff3-451d-859c-0198a7cde893" Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.262557 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p6hcl"] Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.269205 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-pp9r4" Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.274026 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lgdqp"] Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.278983 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-5sknj" Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.280071 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-dvldw"] Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.302697 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wzlhs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-r7mbw_openstack-operators(fc8a9f5f-37fe-417e-9016-886b359a5a71): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.303070 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bdgwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-66cf5c67ff-p6hcl_openstack-operators(edb8360f-2977-47c4-9029-02341a92a6de): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 11:23:10 crc kubenswrapper[5072]: 
E1124 11:23:10.303160 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:848f4c43c6bdd4e33e3ce1d147a85b9b6a6124a150bd5155dce421ef539259e9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d6rtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c9694994-wkqz4_openstack-operators(bdcb07cf-3d31-40c8-bd3b-1c791408a3b9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.304768 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wzlhs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-r7mbw_openstack-operators(fc8a9f5f-37fe-417e-9016-886b359a5a71): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.304853 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bdgwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-66cf5c67ff-p6hcl_openstack-operators(edb8360f-2977-47c4-9029-02341a92a6de): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.304859 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d6rtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c9694994-wkqz4_openstack-operators(bdcb07cf-3d31-40c8-bd3b-1c791408a3b9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.306581 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p6hcl" podUID="edb8360f-2977-47c4-9029-02341a92a6de" Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.306602 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wkqz4" podUID="bdcb07cf-3d31-40c8-bd3b-1c791408a3b9" Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.306606 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-r7mbw" podUID="fc8a9f5f-37fe-417e-9016-886b359a5a71" Nov 24 11:23:10 crc kubenswrapper[5072]: W1124 11:23:10.310958 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd9a8dda_b29e_4e10_837a_d00bdcf6bdaa.slice/crio-e9db21c2251e10c3fe633bef4683db64f1cf0c220b626b5d59bab3eb74008862 WatchSource:0}: Error finding container e9db21c2251e10c3fe633bef4683db64f1cf0c220b626b5d59bab3eb74008862: Status 404 returned error can't find the container with id e9db21c2251e10c3fe633bef4683db64f1cf0c220b626b5d59bab3eb74008862 Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.312929 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s6xsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cb74df96-dvldw_openstack-operators(cd9a8dda-b29e-4e10-837a-d00bdcf6bdaa): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 11:23:10 crc kubenswrapper[5072]: W1124 11:23:10.314678 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88168be8_a585_468a_a983_f56bbb31b4a0.slice/crio-b233241cd8de763c5da2675fe2a7f543716bd19c76c9093587e50c8d64eea065 WatchSource:0}: Error finding container b233241cd8de763c5da2675fe2a7f543716bd19c76c9093587e50c8d64eea065: Status 404 returned error can't find the container with id b233241cd8de763c5da2675fe2a7f543716bd19c76c9093587e50c8d64eea065 Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.318484 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s6xsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cb74df96-dvldw_openstack-operators(cd9a8dda-b29e-4e10-837a-d00bdcf6bdaa): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.319316 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ckxfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-lgdqp_openstack-operators(88168be8-a585-468a-a983-f56bbb31b4a0): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.320553 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5cb74df96-dvldw" podUID="cd9a8dda-b29e-4e10-837a-d00bdcf6bdaa"
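
Annotation: every ErrImagePull in this burst is the kubelet's own client-side throttle, not a registry failure. The kubelet rate-limits image pulls with a token bucket governed by the KubeletConfiguration fields registryPullQPS and registryBurst (defaults 5 and 10), and a pull that finds the bucket empty fails immediately with "pull QPS exceeded" and is retried through the normal backoff. A sketch of that behavior using the same flowcontrol primitive from client-go; the sixteen simultaneous pulls are illustrative, and this is not the kubelet's actual code path:

    // pullqps.go -- illustrative sketch only, not the kubelet's code path. It
    // exercises the token-bucket primitive from client-go's flowcontrol
    // package with the kubelet defaults registryPullQPS=5 and registryBurst=10
    // to show why a burst of operator pods fails fast with "pull QPS exceeded".
    package main

    import (
        "fmt"

        "k8s.io/client-go/util/flowcontrol"
    )

    func main() {
        limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10) // qps, burst

        // Sixteen pods asking for image pulls at the same instant: the first
        // ten drain the burst bucket, the rest are rejected immediately rather
        // than queued, so the kubelet records ErrImagePull and moves on.
        for i := 1; i <= 16; i++ {
            if limiter.TryAccept() {
                fmt.Printf("pull %2d: admitted\n", i)
            } else {
                fmt.Printf("pull %2d: pull QPS exceeded (retried via backoff)\n", i)
            }
        }
    }

Because the failure is local and immediate, the affected pods fall into the ordinary image-pull backoff and recover on their own; raising registryPullQPS and registryBurst (or setting registryPullQPS to 0 to disable the limit) only matters if a node routinely starts this many pods at once.
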
Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.320601 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lgdqp" podUID="88168be8-a585-468a-a983-f56bbb31b4a0"
Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.398651 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400-webhook-certs\") pod \"openstack-operator-controller-manager-698dfbd98-5pfmt\" (UID: \"ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400\") " pod="openstack-operators/openstack-operator-controller-manager-698dfbd98-5pfmt"
Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.398975 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400-metrics-certs\") pod \"openstack-operator-controller-manager-698dfbd98-5pfmt\" (UID: \"ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400\") " pod="openstack-operators/openstack-operator-controller-manager-698dfbd98-5pfmt"
Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.398852 5072 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.399351 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400-webhook-certs podName:ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400 nodeName:}" failed. No retries permitted until 2025-11-24 11:23:12.399336927 +0000 UTC m=+844.110861403 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400-webhook-certs") pod "openstack-operator-controller-manager-698dfbd98-5pfmt" (UID: "ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400") : secret "webhook-server-cert" not found
Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.399302 5072 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.399792 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400-metrics-certs podName:ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400 nodeName:}" failed. No retries permitted until 2025-11-24 11:23:12.399783778 +0000 UTC m=+844.111308254 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400-metrics-certs") pod "openstack-operator-controller-manager-698dfbd98-5pfmt" (UID: "ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400") : secret "metrics-server-cert" not found
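
Annotation: the durationBeforeRetry for these two mount operations doubles on each failure, 500ms at 11:23:08, 1s at 11:23:09, and 2s at 11:23:10, which is the volume manager's per-operation exponential backoff. A minimal sketch that reproduces the same schedule with the generic wait.Backoff helper from apimachinery; the kubelet's volume manager uses its own internal backoff type, so only the numbers, not the code, match the log:

    // mountbackoff.go -- reproduces the retry schedule only; the kubelet's
    // volume manager does not use wait.Backoff itself.
    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        b := wait.Backoff{
            Duration: 500 * time.Millisecond, // first durationBeforeRetry in the log
            Factor:   2.0,                    // each failed SetUp doubles the wait
            Steps:    5,
        }
        for i := 1; i <= 5; i++ {
            fmt.Printf("retry %d scheduled after %v\n", i, b.Step())
        }
        // Prints 500ms, 1s, 2s, 4s, 8s; the first three match the
        // durationBeforeRetry values logged at 11:23:08, 11:23:09, 11:23:10.
    }

The doubling continues until the missing Secrets appear or the backoff reaches its cap, so widening gaps between identical MountVolume errors in a kubelet log indicate a dependency that still has not been satisfied, not a hung operation.
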
Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.692618 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-bz2zj" event={"ID":"0d17eb13-802b-4d4a-b221-1481e16e1110","Type":"ContainerStarted","Data":"2ccaf4ad1a76b25a17cf6b941ef94e0abbe9476bb1c646b4835102432d5a0034"}
Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.697131 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wkqz4" event={"ID":"bdcb07cf-3d31-40c8-bd3b-1c791408a3b9","Type":"ContainerStarted","Data":"6786df159b4a9a76090100e0c46d0144916f5c57be864630f8574ba76a872e6b"}
Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.699356 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-bz2zj" podUID="0d17eb13-802b-4d4a-b221-1481e16e1110"
Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.700254 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b7nnc" event={"ID":"82a02d23-10da-4e39-a81a-9f63180ecc89","Type":"ContainerStarted","Data":"36b1105ce2234fc0616f701693f6de50b322691ce1c56f1b54c350bfe6b38796"}
Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.702608 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lgdqp" event={"ID":"88168be8-a585-468a-a983-f56bbb31b4a0","Type":"ContainerStarted","Data":"b233241cd8de763c5da2675fe2a7f543716bd19c76c9093587e50c8d64eea065"}
Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.708114 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lgdqp" podUID="88168be8-a585-468a-a983-f56bbb31b4a0"
Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.708196 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:848f4c43c6bdd4e33e3ce1d147a85b9b6a6124a150bd5155dce421ef539259e9\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wkqz4" podUID="bdcb07cf-3d31-40c8-bd3b-1c791408a3b9"
Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.708844 5072 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-756nd" event={"ID":"459e53de-60cc-4763-a093-4940428df8c3","Type":"ContainerStarted","Data":"851ab92eab1af764fe386a16b0059696be8f884480c3a5050101bb02d311fb6a"} Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.711722 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-vwkpc" event={"ID":"9696dd76-5a2d-46d8-b344-bde781c44bd9","Type":"ContainerStarted","Data":"8c59f5bccec79bb07dcf256b1cdeb070383c8a7d74a495597d9e7194e1037bf5"} Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.715556 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-rbff2" event={"ID":"39f25192-6179-44cd-894a-0ebf01a675e1","Type":"ContainerStarted","Data":"f1d90aa380f53750f1f39ec53fff31e218291799f5a2bbea3534d0b70f56555c"} Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.729510 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-qn647" event={"ID":"62a8ddcc-1b1e-4bd6-8e4b-41273932a900","Type":"ContainerStarted","Data":"151b614e28ef244015b1959776d8729fbd0abf28829cef3ab8445b4d1953e231"} Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.730661 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-bpsnt" event={"ID":"500235e4-633d-486d-8ea9-bc0830747b6f","Type":"ContainerStarted","Data":"d6f41f2a7c94754caaf8198be740e753c83383490841ff6ff2dbafc2a18506e0"} Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.732755 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-jh4nt" event={"ID":"64a55d3a-a7ab-4bce-8497-1992e9591a90","Type":"ContainerStarted","Data":"ac821e5c80dd4c71d99f1efbb7b871c71d8c64b558280d75b559aa88b1d03c27"} Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.734904 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-r7mbw" event={"ID":"fc8a9f5f-37fe-417e-9016-886b359a5a71","Type":"ContainerStarted","Data":"a65956b11158540a6f965aad6e5139be6302d528b0396e089a170e556d66161a"} Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.736835 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6588bc459f-mnxdw" event={"ID":"7bf279a5-5615-474c-8f17-0066eb4a681d","Type":"ContainerStarted","Data":"dc3d1e243afaa9457f7e55bba78d3f6552aebbd27331ccc6b1c4da55987a60d1"} Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.745569 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-r7mbw" podUID="fc8a9f5f-37fe-417e-9016-886b359a5a71" Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.746159 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7mzzw" 
event={"ID":"d7f60d9f-304e-4531-aeec-6c4a576d3a1e","Type":"ContainerStarted","Data":"01766078dab83038cce9fadf0a0ac37f8761c2ea33ac92f8631f333989850f14"} Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.750431 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-dvldw" event={"ID":"cd9a8dda-b29e-4e10-837a-d00bdcf6bdaa","Type":"ContainerStarted","Data":"e9db21c2251e10c3fe633bef4683db64f1cf0c220b626b5d59bab3eb74008862"} Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.752760 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5cb74df96-dvldw" podUID="cd9a8dda-b29e-4e10-837a-d00bdcf6bdaa" Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.753698 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-lrk4z" event={"ID":"e8ca42b5-22f1-4101-bbf6-d053bda8b6f2","Type":"ContainerStarted","Data":"b92e6f8a97adf49692a9befbf9ce1def678e3c94103c066f3cb1034794447ba0"} Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.755968 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:f0688f6a55b7b548aaafd5c2c4f0749a43e7ea447c62a24e8b35257c5d8ba17f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-lrk4z" podUID="e8ca42b5-22f1-4101-bbf6-d053bda8b6f2" Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.761851 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4jwxd" event={"ID":"a4945263-5f74-4c93-b782-8a381e40275c","Type":"ContainerStarted","Data":"ceb46f6740b7293169b6f0600eb8ab065359bc7ba40f19f6d4018adc1d020941"} Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.765448 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-5s9dg" event={"ID":"67cd7ebd-5d77-4c59-a1af-2283997e4de4","Type":"ContainerStarted","Data":"e42d80ef021c77637ffcda4ba0c4b6bda3429bd3206d5473a03ad420d80533a3"} Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.766303 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-cfj6h" event={"ID":"7c599673-db2a-4c37-88fa-45e7166f6c20","Type":"ContainerStarted","Data":"c0083da15542f1a969c0bb0fe006ac6a3bf727187a0b02bf9c5ad03cba21c048"} Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.767295 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4z4cm" event={"ID":"1b89d966-3ff3-451d-859c-0198a7cde893","Type":"ContainerStarted","Data":"a2bcd93aeb425bb06d4c08ba869643ac5eb02d4bb2ef782f7301e5d05a658a73"} Nov 24 11:23:10 crc 
kubenswrapper[5072]: I1124 11:23:10.771735 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p6hcl" event={"ID":"edb8360f-2977-47c4-9029-02341a92a6de","Type":"ContainerStarted","Data":"e024ecc38c9398a0b203545dfa54f60f63e0d3ecc5e0c6942a732149162d51d4"} Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.774136 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-r7bsx" event={"ID":"321368f6-c64b-4d58-ae2a-e939d6d447f7","Type":"ContainerStarted","Data":"c90e94311be7421501e7bc2d9f5f056888e294350e67d7207c3c765de19aea1a"} Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.798691 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4z4cm" podUID="1b89d966-3ff3-451d-859c-0198a7cde893" Nov 24 11:23:10 crc kubenswrapper[5072]: E1124 11:23:10.799498 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p6hcl" podUID="edb8360f-2977-47c4-9029-02341a92a6de" Nov 24 11:23:10 crc kubenswrapper[5072]: I1124 11:23:10.802466 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-5sknj"] Nov 24 11:23:10 crc kubenswrapper[5072]: W1124 11:23:10.807748 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff7d4c70_56ad_4baa_b7eb_bba77d3811bb.slice/crio-de32737778ac337c4e65ed965757548615c89645da084ae6a7a3498977d25e7c WatchSource:0}: Error finding container de32737778ac337c4e65ed965757548615c89645da084ae6a7a3498977d25e7c: Status 404 returned error can't find the container with id de32737778ac337c4e65ed965757548615c89645da084ae6a7a3498977d25e7c Nov 24 11:23:11 crc kubenswrapper[5072]: I1124 11:23:11.786251 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-5sknj" event={"ID":"ff7d4c70-56ad-4baa-b7eb-bba77d3811bb","Type":"ContainerStarted","Data":"de32737778ac337c4e65ed965757548615c89645da084ae6a7a3498977d25e7c"} Nov 24 11:23:11 crc kubenswrapper[5072]: E1124 11:23:11.789362 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lgdqp" podUID="88168be8-a585-468a-a983-f56bbb31b4a0" Nov 24 11:23:11 
crc kubenswrapper[5072]: E1124 11:23:11.791151 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-bz2zj" podUID="0d17eb13-802b-4d4a-b221-1481e16e1110" Nov 24 11:23:11 crc kubenswrapper[5072]: E1124 11:23:11.791365 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4z4cm" podUID="1b89d966-3ff3-451d-859c-0198a7cde893" Nov 24 11:23:11 crc kubenswrapper[5072]: E1124 11:23:11.791522 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-r7mbw" podUID="fc8a9f5f-37fe-417e-9016-886b359a5a71" Nov 24 11:23:11 crc kubenswrapper[5072]: E1124 11:23:11.791579 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:848f4c43c6bdd4e33e3ce1d147a85b9b6a6124a150bd5155dce421ef539259e9\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wkqz4" podUID="bdcb07cf-3d31-40c8-bd3b-1c791408a3b9" Nov 24 11:23:11 crc kubenswrapper[5072]: E1124 11:23:11.791565 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5cb74df96-dvldw" podUID="cd9a8dda-b29e-4e10-837a-d00bdcf6bdaa" Nov 24 11:23:11 crc kubenswrapper[5072]: E1124 11:23:11.792930 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\", failed to 
\"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p6hcl" podUID="edb8360f-2977-47c4-9029-02341a92a6de" Nov 24 11:23:11 crc kubenswrapper[5072]: E1124 11:23:11.795824 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:f0688f6a55b7b548aaafd5c2c4f0749a43e7ea447c62a24e8b35257c5d8ba17f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-lrk4z" podUID="e8ca42b5-22f1-4101-bbf6-d053bda8b6f2" Nov 24 11:23:12 crc kubenswrapper[5072]: I1124 11:23:12.436328 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400-webhook-certs\") pod \"openstack-operator-controller-manager-698dfbd98-5pfmt\" (UID: \"ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400\") " pod="openstack-operators/openstack-operator-controller-manager-698dfbd98-5pfmt" Nov 24 11:23:12 crc kubenswrapper[5072]: I1124 11:23:12.436716 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400-metrics-certs\") pod \"openstack-operator-controller-manager-698dfbd98-5pfmt\" (UID: \"ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400\") " pod="openstack-operators/openstack-operator-controller-manager-698dfbd98-5pfmt" Nov 24 11:23:12 crc kubenswrapper[5072]: I1124 11:23:12.443014 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400-metrics-certs\") pod \"openstack-operator-controller-manager-698dfbd98-5pfmt\" (UID: \"ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400\") " pod="openstack-operators/openstack-operator-controller-manager-698dfbd98-5pfmt" Nov 24 11:23:12 crc kubenswrapper[5072]: I1124 11:23:12.443189 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400-webhook-certs\") pod \"openstack-operator-controller-manager-698dfbd98-5pfmt\" (UID: \"ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400\") " pod="openstack-operators/openstack-operator-controller-manager-698dfbd98-5pfmt" Nov 24 11:23:12 crc kubenswrapper[5072]: I1124 11:23:12.596831 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-dcvrs" Nov 24 11:23:12 crc kubenswrapper[5072]: I1124 11:23:12.606320 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-698dfbd98-5pfmt" Nov 24 11:23:24 crc kubenswrapper[5072]: I1124 11:23:24.882487 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-bpsnt" event={"ID":"500235e4-633d-486d-8ea9-bc0830747b6f","Type":"ContainerStarted","Data":"6c152cdfd7dfa1dc7759b1f10bdda2941279f17b7d902b29250f02661acca8a1"} Nov 24 11:23:24 crc kubenswrapper[5072]: I1124 11:23:24.888492 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6588bc459f-mnxdw" event={"ID":"7bf279a5-5615-474c-8f17-0066eb4a681d","Type":"ContainerStarted","Data":"8b04276c5f88ce65682039902b27c079bccfa765d497dd0f085622e6fcc9d9c8"} Nov 24 11:23:24 crc kubenswrapper[5072]: I1124 11:23:24.984356 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-698dfbd98-5pfmt"] Nov 24 11:23:25 crc kubenswrapper[5072]: E1124 11:23:25.272470 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cnw66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-544b9bb9-5sknj_openstack-operators(ff7d4c70-56ad-4baa-b7eb-bba77d3811bb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 11:23:25 crc kubenswrapper[5072]: E1124 11:23:25.277185 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-5sknj" podUID="ff7d4c70-56ad-4baa-b7eb-bba77d3811bb" Nov 24 11:23:25 crc kubenswrapper[5072]: E1124 11:23:25.308750 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qgn9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5bfcdc958c-7mzzw_openstack-operators(d7f60d9f-304e-4531-aeec-6c4a576d3a1e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 11:23:25 crc kubenswrapper[5072]: E1124 11:23:25.309038 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-msnw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7c57c8bbc4-b7nnc_openstack-operators(82a02d23-10da-4e39-a81a-9f63180ecc89): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 11:23:25 crc kubenswrapper[5072]: E1124 11:23:25.309070 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6mq82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-748dc6576f-rbff2_openstack-operators(39f25192-6179-44cd-894a-0ebf01a675e1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 11:23:25 crc kubenswrapper[5072]: E1124 11:23:25.311711 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vvqhv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-cb6c4fdb7-vwkpc_openstack-operators(9696dd76-5a2d-46d8-b344-bde781c44bd9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 11:23:25 crc kubenswrapper[5072]: E1124 11:23:25.311774 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-rbff2" podUID="39f25192-6179-44cd-894a-0ebf01a675e1" Nov 24 11:23:25 crc kubenswrapper[5072]: E1124 11:23:25.311809 5072 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7mzzw" podUID="d7f60d9f-304e-4531-aeec-6c4a576d3a1e" Nov 24 11:23:25 crc kubenswrapper[5072]: E1124 11:23:25.311832 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b7nnc" podUID="82a02d23-10da-4e39-a81a-9f63180ecc89" Nov 24 11:23:25 crc kubenswrapper[5072]: E1124 11:23:25.315437 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-vwkpc" podUID="9696dd76-5a2d-46d8-b344-bde781c44bd9" Nov 24 11:23:25 crc kubenswrapper[5072]: I1124 11:23:25.913711 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7mzzw" event={"ID":"d7f60d9f-304e-4531-aeec-6c4a576d3a1e","Type":"ContainerStarted","Data":"186c572db3fe474dfcae9db77dbd34f6a2d87f8e4271ac30b64233c5dc2fa3db"} Nov 24 11:23:25 crc kubenswrapper[5072]: I1124 11:23:25.914859 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7mzzw" Nov 24 11:23:25 crc kubenswrapper[5072]: E1124 11:23:25.923893 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7mzzw" podUID="d7f60d9f-304e-4531-aeec-6c4a576d3a1e" Nov 24 11:23:25 crc kubenswrapper[5072]: I1124 11:23:25.928918 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-r7bsx" event={"ID":"321368f6-c64b-4d58-ae2a-e939d6d447f7","Type":"ContainerStarted","Data":"a3f89c9feda209e3a345dfa743401b9f3413c623969444b9bf400a7cce4d4f32"} Nov 24 11:23:25 crc kubenswrapper[5072]: I1124 11:23:25.932341 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b7nnc" event={"ID":"82a02d23-10da-4e39-a81a-9f63180ecc89","Type":"ContainerStarted","Data":"857828d646f9ac2c0531d08d92ea402de257e2a0d96dab417f2ab00f8aa969df"} Nov 24 11:23:25 crc kubenswrapper[5072]: I1124 11:23:25.933703 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b7nnc" Nov 24 11:23:25 crc kubenswrapper[5072]: E1124 11:23:25.938773 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b7nnc" podUID="82a02d23-10da-4e39-a81a-9f63180ecc89" Nov 24 11:23:25 crc kubenswrapper[5072]: I1124 11:23:25.956202 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-cfj6h" 
event={"ID":"7c599673-db2a-4c37-88fa-45e7166f6c20","Type":"ContainerStarted","Data":"a963fbb744101042d7e692f3a2444cc42cf3e139bc51c881d8981801d2c3e3d7"} Nov 24 11:23:25 crc kubenswrapper[5072]: I1124 11:23:25.967810 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-698dfbd98-5pfmt" event={"ID":"ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400","Type":"ContainerStarted","Data":"67038e968c4b81e3910f1fb600dd169daefcee3fdc843c297e99802a5a6cff76"} Nov 24 11:23:25 crc kubenswrapper[5072]: I1124 11:23:25.967850 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-698dfbd98-5pfmt" event={"ID":"ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400","Type":"ContainerStarted","Data":"1c698daec70d467ba6fb25ea33a1f3be25e68e13067d1f5c28b5891a2b167933"} Nov 24 11:23:25 crc kubenswrapper[5072]: I1124 11:23:25.967955 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-698dfbd98-5pfmt" Nov 24 11:23:25 crc kubenswrapper[5072]: I1124 11:23:25.971605 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-vwkpc" event={"ID":"9696dd76-5a2d-46d8-b344-bde781c44bd9","Type":"ContainerStarted","Data":"654b01ee1aebce8482bdf7ceb69bb1ae9af284a6b581d5ad9c5deebbce340178"} Nov 24 11:23:25 crc kubenswrapper[5072]: I1124 11:23:25.972293 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-vwkpc" Nov 24 11:23:25 crc kubenswrapper[5072]: E1124 11:23:25.974068 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-vwkpc" podUID="9696dd76-5a2d-46d8-b344-bde781c44bd9" Nov 24 11:23:25 crc kubenswrapper[5072]: I1124 11:23:25.980361 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-jh4nt" event={"ID":"64a55d3a-a7ab-4bce-8497-1992e9591a90","Type":"ContainerStarted","Data":"283ae4ff3aebd24a898c757137d9854163f8c49dca7af2f5dece36344d2e9cfa"} Nov 24 11:23:25 crc kubenswrapper[5072]: I1124 11:23:25.999918 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4jwxd" event={"ID":"a4945263-5f74-4c93-b782-8a381e40275c","Type":"ContainerStarted","Data":"2233c5ce9a25e1a363466f453ff293b0f7ee692960c5016f0cfffbedb0da9d57"} Nov 24 11:23:26 crc kubenswrapper[5072]: I1124 11:23:26.004015 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-5s9dg" event={"ID":"67cd7ebd-5d77-4c59-a1af-2283997e4de4","Type":"ContainerStarted","Data":"fb693d50b4e40c8071d25e013561905087bc97a124cb1f44094306bf9dd5d0a6"} Nov 24 11:23:26 crc kubenswrapper[5072]: I1124 11:23:26.023415 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-5sknj" event={"ID":"ff7d4c70-56ad-4baa-b7eb-bba77d3811bb","Type":"ContainerStarted","Data":"d4943ed1ee12ec3659da4532778f463e23ab5e064048df165ce8a14216a4fc57"} Nov 24 11:23:26 crc kubenswrapper[5072]: I1124 11:23:26.023883 5072 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-5sknj" Nov 24 11:23:26 crc kubenswrapper[5072]: E1124 11:23:26.034662 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-5sknj" podUID="ff7d4c70-56ad-4baa-b7eb-bba77d3811bb" Nov 24 11:23:26 crc kubenswrapper[5072]: I1124 11:23:26.038352 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-698dfbd98-5pfmt" podStartSLOduration=18.038332669 podStartE2EDuration="18.038332669s" podCreationTimestamp="2025-11-24 11:23:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:23:26.036068412 +0000 UTC m=+857.747592888" watchObservedRunningTime="2025-11-24 11:23:26.038332669 +0000 UTC m=+857.749857155" Nov 24 11:23:26 crc kubenswrapper[5072]: I1124 11:23:26.047285 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-756nd" event={"ID":"459e53de-60cc-4763-a093-4940428df8c3","Type":"ContainerStarted","Data":"71988770a78c6f6640c67d57abb4dba8f7d739139f34e05b1ffaf9fa97114a95"} Nov 24 11:23:26 crc kubenswrapper[5072]: I1124 11:23:26.051660 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-rbff2" event={"ID":"39f25192-6179-44cd-894a-0ebf01a675e1","Type":"ContainerStarted","Data":"e0ed0655265531813787969b6825d71c5809b5d909cadb6f56f7c472935cf8a1"} Nov 24 11:23:26 crc kubenswrapper[5072]: I1124 11:23:26.051811 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-rbff2" Nov 24 11:23:26 crc kubenswrapper[5072]: E1124 11:23:26.052755 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-rbff2" podUID="39f25192-6179-44cd-894a-0ebf01a675e1" Nov 24 11:23:26 crc kubenswrapper[5072]: I1124 11:23:26.054485 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-qn647" event={"ID":"62a8ddcc-1b1e-4bd6-8e4b-41273932a900","Type":"ContainerStarted","Data":"92b93306c3e9f0a472096336319c3d5cbacccca5a510bfc6ba324c6489c03c6c"} Nov 24 11:23:27 crc kubenswrapper[5072]: E1124 11:23:27.065574 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7mzzw" podUID="d7f60d9f-304e-4531-aeec-6c4a576d3a1e" Nov 24 11:23:27 crc kubenswrapper[5072]: E1124 11:23:27.065649 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-5sknj" podUID="ff7d4c70-56ad-4baa-b7eb-bba77d3811bb" Nov 24 11:23:27 crc kubenswrapper[5072]: E1124 11:23:27.065816 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-vwkpc" podUID="9696dd76-5a2d-46d8-b344-bde781c44bd9" Nov 24 11:23:27 crc kubenswrapper[5072]: E1124 11:23:27.065726 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b7nnc" podUID="82a02d23-10da-4e39-a81a-9f63180ecc89" Nov 24 11:23:27 crc kubenswrapper[5072]: E1124 11:23:27.066313 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-rbff2" podUID="39f25192-6179-44cd-894a-0ebf01a675e1" Nov 24 11:23:30 crc kubenswrapper[5072]: I1124 11:23:30.289658 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-5sknj" Nov 24 11:23:30 crc kubenswrapper[5072]: E1124 11:23:30.292332 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-5sknj" podUID="ff7d4c70-56ad-4baa-b7eb-bba77d3811bb" Nov 24 11:23:32 crc kubenswrapper[5072]: I1124 11:23:32.611954 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-698dfbd98-5pfmt" Nov 24 11:23:35 crc kubenswrapper[5072]: I1124 11:23:35.164937 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-lrk4z" event={"ID":"e8ca42b5-22f1-4101-bbf6-d053bda8b6f2","Type":"ContainerStarted","Data":"48f219844d140a3af7e92c1e718def217fcb821771b94ee4f43ea4c13cec231e"} Nov 24 11:23:35 crc kubenswrapper[5072]: I1124 11:23:35.175352 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lgdqp" event={"ID":"88168be8-a585-468a-a983-f56bbb31b4a0","Type":"ContainerStarted","Data":"6dacbb756a3beeff4f7c86f574f88c4d7f9807a5e6562c4363fcd5e2968ff71f"} Nov 24 11:23:35 crc kubenswrapper[5072]: I1124 11:23:35.179892 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-qn647" event={"ID":"62a8ddcc-1b1e-4bd6-8e4b-41273932a900","Type":"ContainerStarted","Data":"9f6b2598c827e09e73782008f570831b9c6e2d85b136781179b96fe9a412a5b5"} Nov 24 11:23:35 crc kubenswrapper[5072]: I1124 11:23:35.180219 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-qn647" Nov 24 11:23:35 crc kubenswrapper[5072]: I1124 
11:23:35.189955 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-774b86978c-qn647" Nov 24 11:23:35 crc kubenswrapper[5072]: I1124 11:23:35.192127 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-bz2zj" event={"ID":"0d17eb13-802b-4d4a-b221-1481e16e1110","Type":"ContainerStarted","Data":"abdb7dd763347bb68443f0bad7fecd98480e701d9fc2cafb5c4d46c64e41e964"} Nov 24 11:23:35 crc kubenswrapper[5072]: I1124 11:23:35.196694 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lgdqp" podStartSLOduration=3.660897177 podStartE2EDuration="27.196676253s" podCreationTimestamp="2025-11-24 11:23:08 +0000 UTC" firstStartedPulling="2025-11-24 11:23:10.319229838 +0000 UTC m=+842.030754314" lastFinishedPulling="2025-11-24 11:23:33.855008904 +0000 UTC m=+865.566533390" observedRunningTime="2025-11-24 11:23:35.190312553 +0000 UTC m=+866.901837019" watchObservedRunningTime="2025-11-24 11:23:35.196676253 +0000 UTC m=+866.908200729" Nov 24 11:23:35 crc kubenswrapper[5072]: I1124 11:23:35.200255 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-r7mbw" event={"ID":"fc8a9f5f-37fe-417e-9016-886b359a5a71","Type":"ContainerStarted","Data":"f5a9917d83f371ac6709166fdf7dae702310e733e1f92ffdc3f5f7bf36ea637d"} Nov 24 11:23:35 crc kubenswrapper[5072]: I1124 11:23:35.214068 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-774b86978c-qn647" podStartSLOduration=3.473380893 podStartE2EDuration="28.214046431s" podCreationTimestamp="2025-11-24 11:23:07 +0000 UTC" firstStartedPulling="2025-11-24 11:23:09.793404567 +0000 UTC m=+841.504929043" lastFinishedPulling="2025-11-24 11:23:34.534070105 +0000 UTC m=+866.245594581" observedRunningTime="2025-11-24 11:23:35.206970822 +0000 UTC m=+866.918495288" watchObservedRunningTime="2025-11-24 11:23:35.214046431 +0000 UTC m=+866.925570897" Nov 24 11:23:35 crc kubenswrapper[5072]: I1124 11:23:35.217677 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wkqz4" event={"ID":"bdcb07cf-3d31-40c8-bd3b-1c791408a3b9","Type":"ContainerStarted","Data":"b63e9a7e6f343fea7f9c62d53606f73991b12ad48fcbfc0e0ada214623dee5a5"} Nov 24 11:23:35 crc kubenswrapper[5072]: I1124 11:23:35.232685 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-dvldw" event={"ID":"cd9a8dda-b29e-4e10-837a-d00bdcf6bdaa","Type":"ContainerStarted","Data":"5b6e6bff788f124c39cbbb843f5024c82530dba257dab0aad23b479ca353cc4a"} Nov 24 11:23:35 crc kubenswrapper[5072]: I1124 11:23:35.249827 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4z4cm" event={"ID":"1b89d966-3ff3-451d-859c-0198a7cde893","Type":"ContainerStarted","Data":"1633dcf38ed5399540412bf3253ab9b5e825fc6e7b1e6231d05ff66d47818a27"} Nov 24 11:23:35 crc kubenswrapper[5072]: I1124 11:23:35.274294 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p6hcl" 
event={"ID":"edb8360f-2977-47c4-9029-02341a92a6de","Type":"ContainerStarted","Data":"f4c09b03d3e17de28165d870dd9ba230b57d6a78ea7dd202eb9c3531941c0d5a"} Nov 24 11:23:35 crc kubenswrapper[5072]: I1124 11:23:35.287800 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6588bc459f-mnxdw" event={"ID":"7bf279a5-5615-474c-8f17-0066eb4a681d","Type":"ContainerStarted","Data":"9435a8891f1afaa6fa4f9f8e957cb8cbb76a0b8bcf769cb87f9965ebb8c9ebc3"} Nov 24 11:23:35 crc kubenswrapper[5072]: I1124 11:23:35.288037 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-6588bc459f-mnxdw" Nov 24 11:23:35 crc kubenswrapper[5072]: I1124 11:23:35.296455 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-6588bc459f-mnxdw" Nov 24 11:23:35 crc kubenswrapper[5072]: I1124 11:23:35.331823 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-6588bc459f-mnxdw" podStartSLOduration=3.532981814 podStartE2EDuration="28.331805918s" podCreationTimestamp="2025-11-24 11:23:07 +0000 UTC" firstStartedPulling="2025-11-24 11:23:09.762753125 +0000 UTC m=+841.474277621" lastFinishedPulling="2025-11-24 11:23:34.561577229 +0000 UTC m=+866.273101725" observedRunningTime="2025-11-24 11:23:35.314068681 +0000 UTC m=+867.025593187" watchObservedRunningTime="2025-11-24 11:23:35.331805918 +0000 UTC m=+867.043330384" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.296180 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-r7bsx" event={"ID":"321368f6-c64b-4d58-ae2a-e939d6d447f7","Type":"ContainerStarted","Data":"ee64118cffa3f24ef8dc5833e38424fbf5d578c5189a5a68f0d547647266b056"} Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.296600 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-r7bsx" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.305659 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-r7bsx" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.305988 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-lrk4z" event={"ID":"e8ca42b5-22f1-4101-bbf6-d053bda8b6f2","Type":"ContainerStarted","Data":"fca42087fd44520fe0520d38182610ce62cfbb1d37bb18d51fa7880ab87494da"} Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.306378 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-lrk4z" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.313112 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-756nd" event={"ID":"459e53de-60cc-4763-a093-4940428df8c3","Type":"ContainerStarted","Data":"3091888dbac3cab7f33fafb75c815b0fd1655d2483a6c5a410bf7e855c15ab44"} Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.313786 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-756nd" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.321187 5072 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-756nd" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.322260 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-cfj6h" event={"ID":"7c599673-db2a-4c37-88fa-45e7166f6c20","Type":"ContainerStarted","Data":"dc14462c8d21ac37888902ff024baa01620036d917cd537c03ff72fbd68624bb"} Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.324191 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-cfj6h" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.325585 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-r7bsx" podStartSLOduration=4.01576449 podStartE2EDuration="28.32556149s" podCreationTimestamp="2025-11-24 11:23:08 +0000 UTC" firstStartedPulling="2025-11-24 11:23:10.22405587 +0000 UTC m=+841.935580346" lastFinishedPulling="2025-11-24 11:23:34.53385285 +0000 UTC m=+866.245377346" observedRunningTime="2025-11-24 11:23:36.320710998 +0000 UTC m=+868.032235484" watchObservedRunningTime="2025-11-24 11:23:36.32556149 +0000 UTC m=+868.037085966" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.326792 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-cfj6h" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.328838 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-jh4nt" event={"ID":"64a55d3a-a7ab-4bce-8497-1992e9591a90","Type":"ContainerStarted","Data":"88548eecac6cd5587a49f9404103b88151012ef0812c304cc295697532c1128a"} Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.329851 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-jh4nt" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.333627 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-jh4nt" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.334972 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-r7mbw" event={"ID":"fc8a9f5f-37fe-417e-9016-886b359a5a71","Type":"ContainerStarted","Data":"9216af4abcee4d60de30051d2df4124e241c9382db8fd4f3787ca4da70807525"} Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.335960 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-r7mbw" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.339209 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wkqz4" event={"ID":"bdcb07cf-3d31-40c8-bd3b-1c791408a3b9","Type":"ContainerStarted","Data":"a0a38893f30756e45d7303c5f788ae7a7d0b8ce312979c4270617780e552da33"} Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.340071 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wkqz4" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.342096 5072 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-756nd" podStartSLOduration=4.515233007 podStartE2EDuration="29.342078197s" podCreationTimestamp="2025-11-24 11:23:07 +0000 UTC" firstStartedPulling="2025-11-24 11:23:09.715615107 +0000 UTC m=+841.427139583" lastFinishedPulling="2025-11-24 11:23:34.542460287 +0000 UTC m=+866.253984773" observedRunningTime="2025-11-24 11:23:36.338852805 +0000 UTC m=+868.050377281" watchObservedRunningTime="2025-11-24 11:23:36.342078197 +0000 UTC m=+868.053602683" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.353830 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-bpsnt" event={"ID":"500235e4-633d-486d-8ea9-bc0830747b6f","Type":"ContainerStarted","Data":"ebbb1d4a8230fb97e22d5357f945c8075cb534c9a4183963f772f913c4a113cc"} Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.354753 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-bpsnt" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.356492 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-bpsnt" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.357675 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4jwxd" event={"ID":"a4945263-5f74-4c93-b782-8a381e40275c","Type":"ContainerStarted","Data":"fd3a2b281f3e873d3c38b14aeb10445253e41c5692eede032595fb5801ec5ccc"} Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.357920 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4jwxd" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.361708 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4jwxd" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.370837 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p6hcl" event={"ID":"edb8360f-2977-47c4-9029-02341a92a6de","Type":"ContainerStarted","Data":"5d6ff4eae8e98236f11009b205bfd58a12da3557ce627a67ed8ad537b55394c1"} Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.371516 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p6hcl" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.373090 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-dvldw" event={"ID":"cd9a8dda-b29e-4e10-837a-d00bdcf6bdaa","Type":"ContainerStarted","Data":"1f7daaeb0c78e54c3a640ead8c360ee8849cdac032613a48f32dfda65b089780"} Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.373456 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cb74df96-dvldw" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.376744 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-lrk4z" podStartSLOduration=5.688569373 podStartE2EDuration="29.37673243s" 
podCreationTimestamp="2025-11-24 11:23:07 +0000 UTC" firstStartedPulling="2025-11-24 11:23:10.232529093 +0000 UTC m=+841.944053559" lastFinishedPulling="2025-11-24 11:23:33.92069214 +0000 UTC m=+865.632216616" observedRunningTime="2025-11-24 11:23:36.37437127 +0000 UTC m=+868.085895746" watchObservedRunningTime="2025-11-24 11:23:36.37673243 +0000 UTC m=+868.088256906" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.389221 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-5s9dg" event={"ID":"67cd7ebd-5d77-4c59-a1af-2283997e4de4","Type":"ContainerStarted","Data":"4d760761df24167191a7345bea59ac7c3e71245205fc023ffb474d6bcecae427"} Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.389600 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-5s9dg" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.391766 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-5s9dg" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.392592 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4z4cm" event={"ID":"1b89d966-3ff3-451d-859c-0198a7cde893","Type":"ContainerStarted","Data":"249ce8c4490e78a2c1d23829982e7496515518d951286b4889564c11ffcf93b4"} Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.392974 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4z4cm" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.397914 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4jwxd" podStartSLOduration=4.54513919 podStartE2EDuration="29.397901843s" podCreationTimestamp="2025-11-24 11:23:07 +0000 UTC" firstStartedPulling="2025-11-24 11:23:09.68120506 +0000 UTC m=+841.392729546" lastFinishedPulling="2025-11-24 11:23:34.533967713 +0000 UTC m=+866.245492199" observedRunningTime="2025-11-24 11:23:36.39656751 +0000 UTC m=+868.108091986" watchObservedRunningTime="2025-11-24 11:23:36.397901843 +0000 UTC m=+868.109426319" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.401919 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-bz2zj" event={"ID":"0d17eb13-802b-4d4a-b221-1481e16e1110","Type":"ContainerStarted","Data":"ce4908c5513a6436d968a380ebd12e6fbffada82df51507b778d254596f69a9e"} Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.403119 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-bz2zj" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.441239 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-cfj6h" podStartSLOduration=3.6459821310000002 podStartE2EDuration="28.441224245s" podCreationTimestamp="2025-11-24 11:23:08 +0000 UTC" firstStartedPulling="2025-11-24 11:23:09.778285246 +0000 UTC m=+841.489809722" lastFinishedPulling="2025-11-24 11:23:34.57352735 +0000 UTC m=+866.285051836" observedRunningTime="2025-11-24 11:23:36.438089716 +0000 UTC m=+868.149614192" watchObservedRunningTime="2025-11-24 11:23:36.441224245 
+0000 UTC m=+868.152748721" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.444357 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wkqz4" podStartSLOduration=7.056590047 podStartE2EDuration="29.444347074s" podCreationTimestamp="2025-11-24 11:23:07 +0000 UTC" firstStartedPulling="2025-11-24 11:23:10.30305467 +0000 UTC m=+842.014579146" lastFinishedPulling="2025-11-24 11:23:32.690811697 +0000 UTC m=+864.402336173" observedRunningTime="2025-11-24 11:23:36.424380361 +0000 UTC m=+868.135904837" watchObservedRunningTime="2025-11-24 11:23:36.444347074 +0000 UTC m=+868.155871550" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.462951 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-jh4nt" podStartSLOduration=3.715800771 podStartE2EDuration="28.462932902s" podCreationTimestamp="2025-11-24 11:23:08 +0000 UTC" firstStartedPulling="2025-11-24 11:23:09.791177461 +0000 UTC m=+841.502701937" lastFinishedPulling="2025-11-24 11:23:34.538309582 +0000 UTC m=+866.249834068" observedRunningTime="2025-11-24 11:23:36.456151761 +0000 UTC m=+868.167676237" watchObservedRunningTime="2025-11-24 11:23:36.462932902 +0000 UTC m=+868.174457388" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.475795 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-r7mbw" podStartSLOduration=6.534196934 podStartE2EDuration="29.475774206s" podCreationTimestamp="2025-11-24 11:23:07 +0000 UTC" firstStartedPulling="2025-11-24 11:23:10.302542958 +0000 UTC m=+842.014067444" lastFinishedPulling="2025-11-24 11:23:33.24412022 +0000 UTC m=+864.955644716" observedRunningTime="2025-11-24 11:23:36.475263003 +0000 UTC m=+868.186787479" watchObservedRunningTime="2025-11-24 11:23:36.475774206 +0000 UTC m=+868.187298692" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.489909 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-bpsnt" podStartSLOduration=4.661266477 podStartE2EDuration="29.489895682s" podCreationTimestamp="2025-11-24 11:23:07 +0000 UTC" firstStartedPulling="2025-11-24 11:23:09.733503748 +0000 UTC m=+841.445028224" lastFinishedPulling="2025-11-24 11:23:34.562132943 +0000 UTC m=+866.273657429" observedRunningTime="2025-11-24 11:23:36.488607269 +0000 UTC m=+868.200131745" watchObservedRunningTime="2025-11-24 11:23:36.489895682 +0000 UTC m=+868.201420158" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.536441 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p6hcl" podStartSLOduration=5.066564079 podStartE2EDuration="28.536419944s" podCreationTimestamp="2025-11-24 11:23:08 +0000 UTC" firstStartedPulling="2025-11-24 11:23:10.302999659 +0000 UTC m=+842.014524135" lastFinishedPulling="2025-11-24 11:23:33.772855494 +0000 UTC m=+865.484380000" observedRunningTime="2025-11-24 11:23:36.531022438 +0000 UTC m=+868.242546914" watchObservedRunningTime="2025-11-24 11:23:36.536419944 +0000 UTC m=+868.247944420" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.536610 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cb74df96-dvldw" podStartSLOduration=5.076651054 
podStartE2EDuration="28.536605079s" podCreationTimestamp="2025-11-24 11:23:08 +0000 UTC" firstStartedPulling="2025-11-24 11:23:10.312779706 +0000 UTC m=+842.024304182" lastFinishedPulling="2025-11-24 11:23:33.772733701 +0000 UTC m=+865.484258207" observedRunningTime="2025-11-24 11:23:36.508968112 +0000 UTC m=+868.220492598" watchObservedRunningTime="2025-11-24 11:23:36.536605079 +0000 UTC m=+868.248129545" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.563895 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-864885998-bz2zj" podStartSLOduration=4.73921198 podStartE2EDuration="28.563877046s" podCreationTimestamp="2025-11-24 11:23:08 +0000 UTC" firstStartedPulling="2025-11-24 11:23:10.030287047 +0000 UTC m=+841.741811523" lastFinishedPulling="2025-11-24 11:23:33.854952123 +0000 UTC m=+865.566476589" observedRunningTime="2025-11-24 11:23:36.55929266 +0000 UTC m=+868.270817136" watchObservedRunningTime="2025-11-24 11:23:36.563877046 +0000 UTC m=+868.275401522" Nov 24 11:23:36 crc kubenswrapper[5072]: I1124 11:23:36.577974 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-5s9dg" podStartSLOduration=4.750800383 podStartE2EDuration="29.577956211s" podCreationTimestamp="2025-11-24 11:23:07 +0000 UTC" firstStartedPulling="2025-11-24 11:23:09.715129725 +0000 UTC m=+841.426654201" lastFinishedPulling="2025-11-24 11:23:34.542285543 +0000 UTC m=+866.253810029" observedRunningTime="2025-11-24 11:23:36.575245472 +0000 UTC m=+868.286769948" watchObservedRunningTime="2025-11-24 11:23:36.577956211 +0000 UTC m=+868.289480687" Nov 24 11:23:38 crc kubenswrapper[5072]: I1124 11:23:38.019209 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b7nnc" Nov 24 11:23:38 crc kubenswrapper[5072]: I1124 11:23:38.051834 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4z4cm" podStartSLOduration=6.363447359 podStartE2EDuration="30.05180495s" podCreationTimestamp="2025-11-24 11:23:08 +0000 UTC" firstStartedPulling="2025-11-24 11:23:10.232645546 +0000 UTC m=+841.944170022" lastFinishedPulling="2025-11-24 11:23:33.921003107 +0000 UTC m=+865.632527613" observedRunningTime="2025-11-24 11:23:36.594841846 +0000 UTC m=+868.306366322" watchObservedRunningTime="2025-11-24 11:23:38.05180495 +0000 UTC m=+869.763329466" Nov 24 11:23:38 crc kubenswrapper[5072]: I1124 11:23:38.361088 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7mzzw" Nov 24 11:23:38 crc kubenswrapper[5072]: I1124 11:23:38.423192 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b7nnc" event={"ID":"82a02d23-10da-4e39-a81a-9f63180ecc89","Type":"ContainerStarted","Data":"19f79cfabb4be03dd30b1ccda80b316c13b295092076d85eaade22f92d613553"} Nov 24 11:23:38 crc kubenswrapper[5072]: I1124 11:23:38.457097 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-vwkpc" Nov 24 11:23:38 crc kubenswrapper[5072]: I1124 11:23:38.495192 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b7nnc" podStartSLOduration=17.350699601 podStartE2EDuration="31.495174252s" podCreationTimestamp="2025-11-24 11:23:07 +0000 UTC" firstStartedPulling="2025-11-24 11:23:10.302093766 +0000 UTC m=+842.013618242" lastFinishedPulling="2025-11-24 11:23:24.446568407 +0000 UTC m=+856.158092893" observedRunningTime="2025-11-24 11:23:38.459845352 +0000 UTC m=+870.171369878" watchObservedRunningTime="2025-11-24 11:23:38.495174252 +0000 UTC m=+870.206698738" Nov 24 11:23:38 crc kubenswrapper[5072]: I1124 11:23:38.639822 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-rbff2" Nov 24 11:23:39 crc kubenswrapper[5072]: I1124 11:23:39.431464 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-vwkpc" event={"ID":"9696dd76-5a2d-46d8-b344-bde781c44bd9","Type":"ContainerStarted","Data":"ab7456f29b1d1b184c7779a33fcf5a79d77ad881c46fe82cde75471553d09e01"} Nov 24 11:23:39 crc kubenswrapper[5072]: I1124 11:23:39.434185 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-rbff2" event={"ID":"39f25192-6179-44cd-894a-0ebf01a675e1","Type":"ContainerStarted","Data":"4e2fdff34fe52f67e71d9e8c1821d797452d8cb08d11bc9e0af91a615b414a47"} Nov 24 11:23:39 crc kubenswrapper[5072]: I1124 11:23:39.437102 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7mzzw" event={"ID":"d7f60d9f-304e-4531-aeec-6c4a576d3a1e","Type":"ContainerStarted","Data":"127dfa93c907a7740c53d8ebead4f240f8da6ae399216ac91643a368373f136f"} Nov 24 11:23:39 crc kubenswrapper[5072]: I1124 11:23:39.461831 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-vwkpc" podStartSLOduration=17.811249567 podStartE2EDuration="32.461811091s" podCreationTimestamp="2025-11-24 11:23:07 +0000 UTC" firstStartedPulling="2025-11-24 11:23:09.795036469 +0000 UTC m=+841.506560945" lastFinishedPulling="2025-11-24 11:23:24.445597983 +0000 UTC m=+856.157122469" observedRunningTime="2025-11-24 11:23:39.449906481 +0000 UTC m=+871.161430997" watchObservedRunningTime="2025-11-24 11:23:39.461811091 +0000 UTC m=+871.173335577" Nov 24 11:23:39 crc kubenswrapper[5072]: I1124 11:23:39.478515 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7mzzw" podStartSLOduration=17.809844152 podStartE2EDuration="32.478490442s" podCreationTimestamp="2025-11-24 11:23:07 +0000 UTC" firstStartedPulling="2025-11-24 11:23:09.776937392 +0000 UTC m=+841.488461868" lastFinishedPulling="2025-11-24 11:23:24.445583672 +0000 UTC m=+856.157108158" observedRunningTime="2025-11-24 11:23:39.470964932 +0000 UTC m=+871.182489418" watchObservedRunningTime="2025-11-24 11:23:39.478490442 +0000 UTC m=+871.190014938" Nov 24 11:23:39 crc kubenswrapper[5072]: I1124 11:23:39.492109 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-rbff2" podStartSLOduration=18.008984049 podStartE2EDuration="32.492091424s" podCreationTimestamp="2025-11-24 11:23:07 +0000 UTC" firstStartedPulling="2025-11-24 11:23:09.963483263 +0000 UTC m=+841.675007739" lastFinishedPulling="2025-11-24 
11:23:24.446590598 +0000 UTC m=+856.158115114" observedRunningTime="2025-11-24 11:23:39.490650198 +0000 UTC m=+871.202174744" watchObservedRunningTime="2025-11-24 11:23:39.492091424 +0000 UTC m=+871.203615910" Nov 24 11:23:42 crc kubenswrapper[5072]: I1124 11:23:42.471014 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-5sknj" event={"ID":"ff7d4c70-56ad-4baa-b7eb-bba77d3811bb","Type":"ContainerStarted","Data":"ebcd6935d437a5eee2d2d77525cb7da4f6a74df92521d12c6d412400416e87d0"} Nov 24 11:23:42 crc kubenswrapper[5072]: I1124 11:23:42.509221 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-5sknj" podStartSLOduration=20.891760824 podStartE2EDuration="34.509201734s" podCreationTimestamp="2025-11-24 11:23:08 +0000 UTC" firstStartedPulling="2025-11-24 11:23:10.824247684 +0000 UTC m=+842.535772160" lastFinishedPulling="2025-11-24 11:23:24.441688584 +0000 UTC m=+856.153213070" observedRunningTime="2025-11-24 11:23:42.501496389 +0000 UTC m=+874.213020875" watchObservedRunningTime="2025-11-24 11:23:42.509201734 +0000 UTC m=+874.220726220" Nov 24 11:23:43 crc kubenswrapper[5072]: I1124 11:23:43.645478 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:23:43 crc kubenswrapper[5072]: I1124 11:23:43.645568 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:23:48 crc kubenswrapper[5072]: I1124 11:23:48.337355 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wkqz4" Nov 24 11:23:48 crc kubenswrapper[5072]: I1124 11:23:48.604663 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-r7mbw" Nov 24 11:23:48 crc kubenswrapper[5072]: I1124 11:23:48.731270 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p6hcl" Nov 24 11:23:48 crc kubenswrapper[5072]: I1124 11:23:48.772303 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4z4cm" Nov 24 11:23:48 crc kubenswrapper[5072]: I1124 11:23:48.890153 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-lrk4z" Nov 24 11:23:48 crc kubenswrapper[5072]: I1124 11:23:48.891357 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cb74df96-dvldw" Nov 24 11:23:48 crc kubenswrapper[5072]: I1124 11:23:48.929626 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-864885998-bz2zj"
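
The two 11:23:43 entries record a single failed HTTP liveness probe: the kubelet issued GET http://127.0.0.1:8798/health for the machine-config-daemon container and the TCP connect was refused, which typically means nothing was accepting connections on port 8798 at that instant. One failure like this is only logged; the kubelet restarts the container only after the probe's failureThreshold (3 consecutive failures by default) is reached. A rough, illustrative Python equivalent of what the prober checks (the URL comes from the log entry above; the timeout value is an assumption, not the kubelet's actual setting):

    import urllib.request

    def http_probe(url="http://127.0.0.1:8798/health", timeout=1.0):
        """Return True on HTTP 2xx/3xx, False on refused/timeout/error."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 200 <= resp.status < 400   # 2xx/3xx counts as success
        except OSError as exc:  # URLError subclasses OSError; "connection refused" lands here
            print(f"probe failed: {exc}")
            return False

Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.231490 5072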
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-s6zc5"] Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.234047 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-s6zc5" Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.237194 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.237461 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-bmmjd" Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.237931 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.239885 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.251922 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-s6zc5"] Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.288602 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-dw29v"] Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.289690 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-dw29v" Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.293897 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.303362 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-dw29v"] Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.356237 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcstw\" (UniqueName: \"kubernetes.io/projected/cec5ba71-80bf-469f-adb9-5d73a3e8eef9-kube-api-access-tcstw\") pod \"dnsmasq-dns-675f4bcbfc-s6zc5\" (UID: \"cec5ba71-80bf-469f-adb9-5d73a3e8eef9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-s6zc5" Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.356290 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cec5ba71-80bf-469f-adb9-5d73a3e8eef9-config\") pod \"dnsmasq-dns-675f4bcbfc-s6zc5\" (UID: \"cec5ba71-80bf-469f-adb9-5d73a3e8eef9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-s6zc5" Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.457658 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f69faa2-9442-4a55-958e-c063925a5a93-config\") pod \"dnsmasq-dns-78dd6ddcc-dw29v\" (UID: \"9f69faa2-9442-4a55-958e-c063925a5a93\") " pod="openstack/dnsmasq-dns-78dd6ddcc-dw29v" Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.457698 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sppkg\" (UniqueName: \"kubernetes.io/projected/9f69faa2-9442-4a55-958e-c063925a5a93-kube-api-access-sppkg\") pod \"dnsmasq-dns-78dd6ddcc-dw29v\" (UID: \"9f69faa2-9442-4a55-958e-c063925a5a93\") " pod="openstack/dnsmasq-dns-78dd6ddcc-dw29v" Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.457735 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcstw\" 
(UniqueName: \"kubernetes.io/projected/cec5ba71-80bf-469f-adb9-5d73a3e8eef9-kube-api-access-tcstw\") pod \"dnsmasq-dns-675f4bcbfc-s6zc5\" (UID: \"cec5ba71-80bf-469f-adb9-5d73a3e8eef9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-s6zc5" Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.457755 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f69faa2-9442-4a55-958e-c063925a5a93-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-dw29v\" (UID: \"9f69faa2-9442-4a55-958e-c063925a5a93\") " pod="openstack/dnsmasq-dns-78dd6ddcc-dw29v" Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.457775 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cec5ba71-80bf-469f-adb9-5d73a3e8eef9-config\") pod \"dnsmasq-dns-675f4bcbfc-s6zc5\" (UID: \"cec5ba71-80bf-469f-adb9-5d73a3e8eef9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-s6zc5" Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.458688 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cec5ba71-80bf-469f-adb9-5d73a3e8eef9-config\") pod \"dnsmasq-dns-675f4bcbfc-s6zc5\" (UID: \"cec5ba71-80bf-469f-adb9-5d73a3e8eef9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-s6zc5" Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.485461 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcstw\" (UniqueName: \"kubernetes.io/projected/cec5ba71-80bf-469f-adb9-5d73a3e8eef9-kube-api-access-tcstw\") pod \"dnsmasq-dns-675f4bcbfc-s6zc5\" (UID: \"cec5ba71-80bf-469f-adb9-5d73a3e8eef9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-s6zc5" Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.550157 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-s6zc5" Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.559490 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f69faa2-9442-4a55-958e-c063925a5a93-config\") pod \"dnsmasq-dns-78dd6ddcc-dw29v\" (UID: \"9f69faa2-9442-4a55-958e-c063925a5a93\") " pod="openstack/dnsmasq-dns-78dd6ddcc-dw29v" Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.559533 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sppkg\" (UniqueName: \"kubernetes.io/projected/9f69faa2-9442-4a55-958e-c063925a5a93-kube-api-access-sppkg\") pod \"dnsmasq-dns-78dd6ddcc-dw29v\" (UID: \"9f69faa2-9442-4a55-958e-c063925a5a93\") " pod="openstack/dnsmasq-dns-78dd6ddcc-dw29v" Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.559572 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f69faa2-9442-4a55-958e-c063925a5a93-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-dw29v\" (UID: \"9f69faa2-9442-4a55-958e-c063925a5a93\") " pod="openstack/dnsmasq-dns-78dd6ddcc-dw29v" Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.560503 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f69faa2-9442-4a55-958e-c063925a5a93-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-dw29v\" (UID: \"9f69faa2-9442-4a55-958e-c063925a5a93\") " pod="openstack/dnsmasq-dns-78dd6ddcc-dw29v" Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.560552 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f69faa2-9442-4a55-958e-c063925a5a93-config\") pod \"dnsmasq-dns-78dd6ddcc-dw29v\" (UID: \"9f69faa2-9442-4a55-958e-c063925a5a93\") " pod="openstack/dnsmasq-dns-78dd6ddcc-dw29v" Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.578912 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sppkg\" (UniqueName: \"kubernetes.io/projected/9f69faa2-9442-4a55-958e-c063925a5a93-kube-api-access-sppkg\") pod \"dnsmasq-dns-78dd6ddcc-dw29v\" (UID: \"9f69faa2-9442-4a55-958e-c063925a5a93\") " pod="openstack/dnsmasq-dns-78dd6ddcc-dw29v"
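
Each volume in these pods leaves the same three-step trace: operationExecutor.VerifyControllerAttachedVolume (reconciler_common.go:245), MountVolume started (reconciler_common.go:218), and MountVolume.SetUp succeeded (operation_generator.go:637). For the ConfigMap and projected volumes here the whole sequence completes within milliseconds. A rough Python sketch that pairs the start and success entries from a saved excerpt of this journal to measure per-volume mount latency (the parsing assumptions are mine, not kubelet's; it keys by volume name only, which is fine within one pod but would collide across pods):

    import re

    TS = re.compile(r'[IW]\d{4} (\d{2}):(\d{2}):(\d{2}\.\d+) \d+ ')
    VOL = re.compile(r'for volume \\?"([^"\\]+)\\?"')

    def mount_latencies(lines):
        """Seconds from 'MountVolume started' to 'MountVolume.SetUp succeeded'."""
        started, latency = {}, {}
        for line in lines:
            ts, vol = TS.search(line), VOL.search(line)
            if not (ts and vol):
                continue
            h, m, s = ts.groups()
            t = int(h) * 3600 + int(m) * 60 + float(s)
            if "MountVolume started" in line:
                started[vol.group(1)] = t
            elif "MountVolume.SetUp succeeded" in line and vol.group(1) in started:
                latency[vol.group(1)] = t - started[vol.group(1)]
        return latency

    # For dnsmasq-dns-78dd6ddcc-dw29v above: config goes 11:24:04.559490 ->
    # 11:24:04.560552 (about 1.1 ms); kube-api-access-sppkg takes about 19 ms
    # (04.559533 -> 04.578912).

Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.602954 5072 util.go:30] "No sandbox for pod can be found.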
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-dw29v" Nov 24 11:24:04 crc kubenswrapper[5072]: I1124 11:24:04.901102 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-dw29v"] Nov 24 11:24:05 crc kubenswrapper[5072]: W1124 11:24:05.023211 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcec5ba71_80bf_469f_adb9_5d73a3e8eef9.slice/crio-966e3b5661b43c72dd7f5e96365ac0e88ad00cd89928033de1b92c8a84afffc2 WatchSource:0}: Error finding container 966e3b5661b43c72dd7f5e96365ac0e88ad00cd89928033de1b92c8a84afffc2: Status 404 returned error can't find the container with id 966e3b5661b43c72dd7f5e96365ac0e88ad00cd89928033de1b92c8a84afffc2 Nov 24 11:24:05 crc kubenswrapper[5072]: I1124 11:24:05.038226 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-s6zc5"] Nov 24 11:24:05 crc kubenswrapper[5072]: I1124 11:24:05.696705 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-s6zc5" event={"ID":"cec5ba71-80bf-469f-adb9-5d73a3e8eef9","Type":"ContainerStarted","Data":"966e3b5661b43c72dd7f5e96365ac0e88ad00cd89928033de1b92c8a84afffc2"} Nov 24 11:24:05 crc kubenswrapper[5072]: I1124 11:24:05.699532 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-dw29v" event={"ID":"9f69faa2-9442-4a55-958e-c063925a5a93","Type":"ContainerStarted","Data":"e8bdb9eb9c6de57d5d9d50314dc20f71c6c57e95d7678c7176528a7eaf013b07"} Nov 24 11:24:07 crc kubenswrapper[5072]: I1124 11:24:07.194662 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-s6zc5"] Nov 24 11:24:07 crc kubenswrapper[5072]: I1124 11:24:07.220708 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-l5dss"] Nov 24 11:24:07 crc kubenswrapper[5072]: I1124 11:24:07.221798 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-l5dss" Nov 24 11:24:07 crc kubenswrapper[5072]: I1124 11:24:07.236336 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-l5dss"] Nov 24 11:24:07 crc kubenswrapper[5072]: I1124 11:24:07.408708 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvpmw\" (UniqueName: \"kubernetes.io/projected/583d674d-7ef5-4897-9a08-e278ac090ee5-kube-api-access-bvpmw\") pod \"dnsmasq-dns-666b6646f7-l5dss\" (UID: \"583d674d-7ef5-4897-9a08-e278ac090ee5\") " pod="openstack/dnsmasq-dns-666b6646f7-l5dss" Nov 24 11:24:07 crc kubenswrapper[5072]: I1124 11:24:07.408779 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/583d674d-7ef5-4897-9a08-e278ac090ee5-dns-svc\") pod \"dnsmasq-dns-666b6646f7-l5dss\" (UID: \"583d674d-7ef5-4897-9a08-e278ac090ee5\") " pod="openstack/dnsmasq-dns-666b6646f7-l5dss" Nov 24 11:24:07 crc kubenswrapper[5072]: I1124 11:24:07.408847 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/583d674d-7ef5-4897-9a08-e278ac090ee5-config\") pod \"dnsmasq-dns-666b6646f7-l5dss\" (UID: \"583d674d-7ef5-4897-9a08-e278ac090ee5\") " pod="openstack/dnsmasq-dns-666b6646f7-l5dss" Nov 24 11:24:07 crc kubenswrapper[5072]: I1124 11:24:07.511522 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvpmw\" (UniqueName: \"kubernetes.io/projected/583d674d-7ef5-4897-9a08-e278ac090ee5-kube-api-access-bvpmw\") pod \"dnsmasq-dns-666b6646f7-l5dss\" (UID: \"583d674d-7ef5-4897-9a08-e278ac090ee5\") " pod="openstack/dnsmasq-dns-666b6646f7-l5dss" Nov 24 11:24:07 crc kubenswrapper[5072]: I1124 11:24:07.511609 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/583d674d-7ef5-4897-9a08-e278ac090ee5-dns-svc\") pod \"dnsmasq-dns-666b6646f7-l5dss\" (UID: \"583d674d-7ef5-4897-9a08-e278ac090ee5\") " pod="openstack/dnsmasq-dns-666b6646f7-l5dss" Nov 24 11:24:07 crc kubenswrapper[5072]: I1124 11:24:07.511674 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/583d674d-7ef5-4897-9a08-e278ac090ee5-config\") pod \"dnsmasq-dns-666b6646f7-l5dss\" (UID: \"583d674d-7ef5-4897-9a08-e278ac090ee5\") " pod="openstack/dnsmasq-dns-666b6646f7-l5dss" Nov 24 11:24:07 crc kubenswrapper[5072]: I1124 11:24:07.512448 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/583d674d-7ef5-4897-9a08-e278ac090ee5-dns-svc\") pod \"dnsmasq-dns-666b6646f7-l5dss\" (UID: \"583d674d-7ef5-4897-9a08-e278ac090ee5\") " pod="openstack/dnsmasq-dns-666b6646f7-l5dss" Nov 24 11:24:07 crc kubenswrapper[5072]: I1124 11:24:07.512534 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/583d674d-7ef5-4897-9a08-e278ac090ee5-config\") pod \"dnsmasq-dns-666b6646f7-l5dss\" (UID: \"583d674d-7ef5-4897-9a08-e278ac090ee5\") " pod="openstack/dnsmasq-dns-666b6646f7-l5dss" Nov 24 11:24:07 crc kubenswrapper[5072]: I1124 11:24:07.544886 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-dw29v"] Nov 24 11:24:07 crc kubenswrapper[5072]: I1124 11:24:07.544915 
5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvpmw\" (UniqueName: \"kubernetes.io/projected/583d674d-7ef5-4897-9a08-e278ac090ee5-kube-api-access-bvpmw\") pod \"dnsmasq-dns-666b6646f7-l5dss\" (UID: \"583d674d-7ef5-4897-9a08-e278ac090ee5\") " pod="openstack/dnsmasq-dns-666b6646f7-l5dss" Nov 24 11:24:07 crc kubenswrapper[5072]: I1124 11:24:07.557685 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kphnt"] Nov 24 11:24:07 crc kubenswrapper[5072]: I1124 11:24:07.558780 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-kphnt" Nov 24 11:24:07 crc kubenswrapper[5072]: I1124 11:24:07.577901 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kphnt"] Nov 24 11:24:07 crc kubenswrapper[5072]: I1124 11:24:07.715479 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02573658-0503-4bdb-81a8-21e289b8d886-config\") pod \"dnsmasq-dns-57d769cc4f-kphnt\" (UID: \"02573658-0503-4bdb-81a8-21e289b8d886\") " pod="openstack/dnsmasq-dns-57d769cc4f-kphnt" Nov 24 11:24:07 crc kubenswrapper[5072]: I1124 11:24:07.715527 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02573658-0503-4bdb-81a8-21e289b8d886-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-kphnt\" (UID: \"02573658-0503-4bdb-81a8-21e289b8d886\") " pod="openstack/dnsmasq-dns-57d769cc4f-kphnt" Nov 24 11:24:07 crc kubenswrapper[5072]: I1124 11:24:07.715563 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwrr8\" (UniqueName: \"kubernetes.io/projected/02573658-0503-4bdb-81a8-21e289b8d886-kube-api-access-mwrr8\") pod \"dnsmasq-dns-57d769cc4f-kphnt\" (UID: \"02573658-0503-4bdb-81a8-21e289b8d886\") " pod="openstack/dnsmasq-dns-57d769cc4f-kphnt" Nov 24 11:24:07 crc kubenswrapper[5072]: I1124 11:24:07.818360 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwrr8\" (UniqueName: \"kubernetes.io/projected/02573658-0503-4bdb-81a8-21e289b8d886-kube-api-access-mwrr8\") pod \"dnsmasq-dns-57d769cc4f-kphnt\" (UID: \"02573658-0503-4bdb-81a8-21e289b8d886\") " pod="openstack/dnsmasq-dns-57d769cc4f-kphnt" Nov 24 11:24:07 crc kubenswrapper[5072]: I1124 11:24:07.820170 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02573658-0503-4bdb-81a8-21e289b8d886-config\") pod \"dnsmasq-dns-57d769cc4f-kphnt\" (UID: \"02573658-0503-4bdb-81a8-21e289b8d886\") " pod="openstack/dnsmasq-dns-57d769cc4f-kphnt" Nov 24 11:24:07 crc kubenswrapper[5072]: I1124 11:24:07.820221 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02573658-0503-4bdb-81a8-21e289b8d886-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-kphnt\" (UID: \"02573658-0503-4bdb-81a8-21e289b8d886\") " pod="openstack/dnsmasq-dns-57d769cc4f-kphnt" Nov 24 11:24:07 crc kubenswrapper[5072]: I1124 11:24:07.821046 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02573658-0503-4bdb-81a8-21e289b8d886-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-kphnt\" (UID: \"02573658-0503-4bdb-81a8-21e289b8d886\") " 
pod="openstack/dnsmasq-dns-57d769cc4f-kphnt" Nov 24 11:24:07 crc kubenswrapper[5072]: I1124 11:24:07.821606 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02573658-0503-4bdb-81a8-21e289b8d886-config\") pod \"dnsmasq-dns-57d769cc4f-kphnt\" (UID: \"02573658-0503-4bdb-81a8-21e289b8d886\") " pod="openstack/dnsmasq-dns-57d769cc4f-kphnt" Nov 24 11:24:07 crc kubenswrapper[5072]: I1124 11:24:07.839406 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-l5dss" Nov 24 11:24:07 crc kubenswrapper[5072]: I1124 11:24:07.839849 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwrr8\" (UniqueName: \"kubernetes.io/projected/02573658-0503-4bdb-81a8-21e289b8d886-kube-api-access-mwrr8\") pod \"dnsmasq-dns-57d769cc4f-kphnt\" (UID: \"02573658-0503-4bdb-81a8-21e289b8d886\") " pod="openstack/dnsmasq-dns-57d769cc4f-kphnt" Nov 24 11:24:07 crc kubenswrapper[5072]: I1124 11:24:07.886478 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-kphnt" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.338366 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-l5dss"] Nov 24 11:24:08 crc kubenswrapper[5072]: W1124 11:24:08.351521 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod583d674d_7ef5_4897_9a08_e278ac090ee5.slice/crio-26538297febd3c859100c46d41b0ec11919862b5a9234e1d6dcc49d92cac6c37 WatchSource:0}: Error finding container 26538297febd3c859100c46d41b0ec11919862b5a9234e1d6dcc49d92cac6c37: Status 404 returned error can't find the container with id 26538297febd3c859100c46d41b0ec11919862b5a9234e1d6dcc49d92cac6c37 Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.397052 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kphnt"] Nov 24 11:24:08 crc kubenswrapper[5072]: W1124 11:24:08.400893 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02573658_0503_4bdb_81a8_21e289b8d886.slice/crio-f1d5408544f7a154216e7acfc483e6a840484b361d798a83735cfa092bc0128d WatchSource:0}: Error finding container f1d5408544f7a154216e7acfc483e6a840484b361d798a83735cfa092bc0128d: Status 404 returned error can't find the container with id f1d5408544f7a154216e7acfc483e6a840484b361d798a83735cfa092bc0128d Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.409436 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.412517 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.414563 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.415237 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.416337 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.416394 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.416467 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-md6cz" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.416485 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.416542 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.416627 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.530761 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwffz\" (UniqueName: \"kubernetes.io/projected/354afe75-70d3-4c45-a990-0299f821b0af-kube-api-access-lwffz\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.530801 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/354afe75-70d3-4c45-a990-0299f821b0af-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.530852 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/354afe75-70d3-4c45-a990-0299f821b0af-config-data\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.530884 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/354afe75-70d3-4c45-a990-0299f821b0af-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.530910 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/354afe75-70d3-4c45-a990-0299f821b0af-server-conf\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.530929 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.530948 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/354afe75-70d3-4c45-a990-0299f821b0af-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.530972 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/354afe75-70d3-4c45-a990-0299f821b0af-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.531023 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/354afe75-70d3-4c45-a990-0299f821b0af-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.531063 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/354afe75-70d3-4c45-a990-0299f821b0af-pod-info\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.531085 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/354afe75-70d3-4c45-a990-0299f821b0af-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.632650 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwffz\" (UniqueName: \"kubernetes.io/projected/354afe75-70d3-4c45-a990-0299f821b0af-kube-api-access-lwffz\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.632698 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/354afe75-70d3-4c45-a990-0299f821b0af-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.632730 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/354afe75-70d3-4c45-a990-0299f821b0af-config-data\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.632767 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/354afe75-70d3-4c45-a990-0299f821b0af-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " 
pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.632795 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/354afe75-70d3-4c45-a990-0299f821b0af-server-conf\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.632816 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.632849 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/354afe75-70d3-4c45-a990-0299f821b0af-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.632874 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/354afe75-70d3-4c45-a990-0299f821b0af-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.632895 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/354afe75-70d3-4c45-a990-0299f821b0af-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.632913 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/354afe75-70d3-4c45-a990-0299f821b0af-pod-info\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.632941 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/354afe75-70d3-4c45-a990-0299f821b0af-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.633523 5072 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.633625 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/354afe75-70d3-4c45-a990-0299f821b0af-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.633649 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/354afe75-70d3-4c45-a990-0299f821b0af-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.634281 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/354afe75-70d3-4c45-a990-0299f821b0af-server-conf\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.634506 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/354afe75-70d3-4c45-a990-0299f821b0af-config-data\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.635208 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/354afe75-70d3-4c45-a990-0299f821b0af-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.638661 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/354afe75-70d3-4c45-a990-0299f821b0af-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.638950 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/354afe75-70d3-4c45-a990-0299f821b0af-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.639037 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/354afe75-70d3-4c45-a990-0299f821b0af-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.648303 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/354afe75-70d3-4c45-a990-0299f821b0af-pod-info\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.648851 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwffz\" (UniqueName: \"kubernetes.io/projected/354afe75-70d3-4c45-a990-0299f821b0af-kube-api-access-lwffz\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.660258 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.691244 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 
24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.712164 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.718945 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.719100 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.719246 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.719800 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.719803 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.719924 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-np4n4" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.719999 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.720142 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.735691 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kphnt" event={"ID":"02573658-0503-4bdb-81a8-21e289b8d886","Type":"ContainerStarted","Data":"f1d5408544f7a154216e7acfc483e6a840484b361d798a83735cfa092bc0128d"} Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.741488 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-l5dss" event={"ID":"583d674d-7ef5-4897-9a08-e278ac090ee5","Type":"ContainerStarted","Data":"26538297febd3c859100c46d41b0ec11919862b5a9234e1d6dcc49d92cac6c37"} Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.749998 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.835916 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cts8f\" (UniqueName: \"kubernetes.io/projected/224cff60-3d72-478d-9788-926bbca42ad2-kube-api-access-cts8f\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.836001 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/224cff60-3d72-478d-9788-926bbca42ad2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.838022 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/224cff60-3d72-478d-9788-926bbca42ad2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.838051 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/224cff60-3d72-478d-9788-926bbca42ad2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.838084 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/224cff60-3d72-478d-9788-926bbca42ad2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.838105 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/224cff60-3d72-478d-9788-926bbca42ad2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.838135 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/224cff60-3d72-478d-9788-926bbca42ad2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.838192 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/224cff60-3d72-478d-9788-926bbca42ad2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.838226 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/224cff60-3d72-478d-9788-926bbca42ad2-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.838267 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.838285 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/224cff60-3d72-478d-9788-926bbca42ad2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.939501 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/224cff60-3d72-478d-9788-926bbca42ad2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.939850 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/224cff60-3d72-478d-9788-926bbca42ad2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.939886 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/224cff60-3d72-478d-9788-926bbca42ad2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.939915 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/224cff60-3d72-478d-9788-926bbca42ad2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.939957 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/224cff60-3d72-478d-9788-926bbca42ad2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.939984 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/224cff60-3d72-478d-9788-926bbca42ad2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.940026 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/224cff60-3d72-478d-9788-926bbca42ad2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 
crc kubenswrapper[5072]: I1124 11:24:08.940059 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.940113 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/224cff60-3d72-478d-9788-926bbca42ad2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.940163 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cts8f\" (UniqueName: \"kubernetes.io/projected/224cff60-3d72-478d-9788-926bbca42ad2-kube-api-access-cts8f\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.940216 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/224cff60-3d72-478d-9788-926bbca42ad2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.940731 5072 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.940843 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/224cff60-3d72-478d-9788-926bbca42ad2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.943109 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/224cff60-3d72-478d-9788-926bbca42ad2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.943524 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/224cff60-3d72-478d-9788-926bbca42ad2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.943702 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/224cff60-3d72-478d-9788-926bbca42ad2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.944817 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/224cff60-3d72-478d-9788-926bbca42ad2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.946057 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/224cff60-3d72-478d-9788-926bbca42ad2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.946779 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/224cff60-3d72-478d-9788-926bbca42ad2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.948778 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/224cff60-3d72-478d-9788-926bbca42ad2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.950909 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/224cff60-3d72-478d-9788-926bbca42ad2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.959097 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cts8f\" (UniqueName: \"kubernetes.io/projected/224cff60-3d72-478d-9788-926bbca42ad2-kube-api-access-cts8f\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:08 crc kubenswrapper[5072]: I1124 11:24:08.968065 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:09 crc kubenswrapper[5072]: I1124 11:24:09.045970 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-np4n4" Nov 24 11:24:09 crc kubenswrapper[5072]: I1124 11:24:09.054588 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:09 crc kubenswrapper[5072]: I1124 11:24:09.172347 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 11:24:09 crc kubenswrapper[5072]: W1124 11:24:09.182560 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod354afe75_70d3_4c45_a990_0299f821b0af.slice/crio-5d84a0f6dcbc41495cb0e6095d4bf49c2d0904b4b71e374fbc7755861fc0bf62 WatchSource:0}: Error finding container 5d84a0f6dcbc41495cb0e6095d4bf49c2d0904b4b71e374fbc7755861fc0bf62: Status 404 returned error can't find the container with id 5d84a0f6dcbc41495cb0e6095d4bf49c2d0904b4b71e374fbc7755861fc0bf62 Nov 24 11:24:09 crc kubenswrapper[5072]: I1124 11:24:09.593327 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 11:24:09 crc kubenswrapper[5072]: I1124 11:24:09.758384 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"354afe75-70d3-4c45-a990-0299f821b0af","Type":"ContainerStarted","Data":"5d84a0f6dcbc41495cb0e6095d4bf49c2d0904b4b71e374fbc7755861fc0bf62"} Nov 24 11:24:09 crc kubenswrapper[5072]: I1124 11:24:09.759908 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"224cff60-3d72-478d-9788-926bbca42ad2","Type":"ContainerStarted","Data":"9627420c3e20b82c910779ae70b18b459e6760fccf8bef29f33639e6dfc6cc89"} Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.256622 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.257792 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.260986 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.261447 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-zdp6r" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.262640 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.262752 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.267201 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.268413 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.370756 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0f143b81-90ef-461e-a3b5-36ceb68eda94-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0f143b81-90ef-461e-a3b5-36ceb68eda94\") " pod="openstack/openstack-galera-0" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.370905 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0f143b81-90ef-461e-a3b5-36ceb68eda94-config-data-default\") pod \"openstack-galera-0\" (UID: \"0f143b81-90ef-461e-a3b5-36ceb68eda94\") " pod="openstack/openstack-galera-0" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.370945 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f143b81-90ef-461e-a3b5-36ceb68eda94-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0f143b81-90ef-461e-a3b5-36ceb68eda94\") " pod="openstack/openstack-galera-0" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.371009 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f143b81-90ef-461e-a3b5-36ceb68eda94-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0f143b81-90ef-461e-a3b5-36ceb68eda94\") " pod="openstack/openstack-galera-0" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.371171 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f143b81-90ef-461e-a3b5-36ceb68eda94-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0f143b81-90ef-461e-a3b5-36ceb68eda94\") " pod="openstack/openstack-galera-0" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.371225 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0f143b81-90ef-461e-a3b5-36ceb68eda94-kolla-config\") pod \"openstack-galera-0\" (UID: \"0f143b81-90ef-461e-a3b5-36ceb68eda94\") " pod="openstack/openstack-galera-0" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.371298 5072 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"0f143b81-90ef-461e-a3b5-36ceb68eda94\") " pod="openstack/openstack-galera-0" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.371365 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqdwm\" (UniqueName: \"kubernetes.io/projected/0f143b81-90ef-461e-a3b5-36ceb68eda94-kube-api-access-bqdwm\") pod \"openstack-galera-0\" (UID: \"0f143b81-90ef-461e-a3b5-36ceb68eda94\") " pod="openstack/openstack-galera-0" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.474416 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0f143b81-90ef-461e-a3b5-36ceb68eda94-config-data-default\") pod \"openstack-galera-0\" (UID: \"0f143b81-90ef-461e-a3b5-36ceb68eda94\") " pod="openstack/openstack-galera-0" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.474484 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f143b81-90ef-461e-a3b5-36ceb68eda94-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0f143b81-90ef-461e-a3b5-36ceb68eda94\") " pod="openstack/openstack-galera-0" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.474519 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f143b81-90ef-461e-a3b5-36ceb68eda94-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0f143b81-90ef-461e-a3b5-36ceb68eda94\") " pod="openstack/openstack-galera-0" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.474552 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f143b81-90ef-461e-a3b5-36ceb68eda94-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0f143b81-90ef-461e-a3b5-36ceb68eda94\") " pod="openstack/openstack-galera-0" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.474589 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0f143b81-90ef-461e-a3b5-36ceb68eda94-kolla-config\") pod \"openstack-galera-0\" (UID: \"0f143b81-90ef-461e-a3b5-36ceb68eda94\") " pod="openstack/openstack-galera-0" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.474631 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"0f143b81-90ef-461e-a3b5-36ceb68eda94\") " pod="openstack/openstack-galera-0" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.474661 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqdwm\" (UniqueName: \"kubernetes.io/projected/0f143b81-90ef-461e-a3b5-36ceb68eda94-kube-api-access-bqdwm\") pod \"openstack-galera-0\" (UID: \"0f143b81-90ef-461e-a3b5-36ceb68eda94\") " pod="openstack/openstack-galera-0" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.474707 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0f143b81-90ef-461e-a3b5-36ceb68eda94-config-data-generated\") pod 
\"openstack-galera-0\" (UID: \"0f143b81-90ef-461e-a3b5-36ceb68eda94\") " pod="openstack/openstack-galera-0" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.475523 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0f143b81-90ef-461e-a3b5-36ceb68eda94-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0f143b81-90ef-461e-a3b5-36ceb68eda94\") " pod="openstack/openstack-galera-0" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.476760 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0f143b81-90ef-461e-a3b5-36ceb68eda94-config-data-default\") pod \"openstack-galera-0\" (UID: \"0f143b81-90ef-461e-a3b5-36ceb68eda94\") " pod="openstack/openstack-galera-0" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.478010 5072 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"0f143b81-90ef-461e-a3b5-36ceb68eda94\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-galera-0" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.478157 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f143b81-90ef-461e-a3b5-36ceb68eda94-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0f143b81-90ef-461e-a3b5-36ceb68eda94\") " pod="openstack/openstack-galera-0" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.478585 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0f143b81-90ef-461e-a3b5-36ceb68eda94-kolla-config\") pod \"openstack-galera-0\" (UID: \"0f143b81-90ef-461e-a3b5-36ceb68eda94\") " pod="openstack/openstack-galera-0" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.491266 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f143b81-90ef-461e-a3b5-36ceb68eda94-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0f143b81-90ef-461e-a3b5-36ceb68eda94\") " pod="openstack/openstack-galera-0" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.491523 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f143b81-90ef-461e-a3b5-36ceb68eda94-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0f143b81-90ef-461e-a3b5-36ceb68eda94\") " pod="openstack/openstack-galera-0" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.493511 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqdwm\" (UniqueName: \"kubernetes.io/projected/0f143b81-90ef-461e-a3b5-36ceb68eda94-kube-api-access-bqdwm\") pod \"openstack-galera-0\" (UID: \"0f143b81-90ef-461e-a3b5-36ceb68eda94\") " pod="openstack/openstack-galera-0" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.519027 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"0f143b81-90ef-461e-a3b5-36ceb68eda94\") " pod="openstack/openstack-galera-0" Nov 24 11:24:10 crc kubenswrapper[5072]: I1124 11:24:10.584531 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.691400 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.692840 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.697155 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.697167 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-w9pqk" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.697302 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.698610 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.699298 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.792807 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e05f8763-9e64-4bf6-84c8-25df03057309-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e05f8763-9e64-4bf6-84c8-25df03057309\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.792919 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e05f8763-9e64-4bf6-84c8-25df03057309-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e05f8763-9e64-4bf6-84c8-25df03057309\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.792950 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e05f8763-9e64-4bf6-84c8-25df03057309\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.792982 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e05f8763-9e64-4bf6-84c8-25df03057309-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e05f8763-9e64-4bf6-84c8-25df03057309\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.793019 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e05f8763-9e64-4bf6-84c8-25df03057309-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e05f8763-9e64-4bf6-84c8-25df03057309\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.793060 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e05f8763-9e64-4bf6-84c8-25df03057309-galera-tls-certs\") pod 
\"openstack-cell1-galera-0\" (UID: \"e05f8763-9e64-4bf6-84c8-25df03057309\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.793077 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e05f8763-9e64-4bf6-84c8-25df03057309-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e05f8763-9e64-4bf6-84c8-25df03057309\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.793098 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsw6s\" (UniqueName: \"kubernetes.io/projected/e05f8763-9e64-4bf6-84c8-25df03057309-kube-api-access-wsw6s\") pod \"openstack-cell1-galera-0\" (UID: \"e05f8763-9e64-4bf6-84c8-25df03057309\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.894995 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e05f8763-9e64-4bf6-84c8-25df03057309-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e05f8763-9e64-4bf6-84c8-25df03057309\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.895055 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e05f8763-9e64-4bf6-84c8-25df03057309-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e05f8763-9e64-4bf6-84c8-25df03057309\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.895098 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsw6s\" (UniqueName: \"kubernetes.io/projected/e05f8763-9e64-4bf6-84c8-25df03057309-kube-api-access-wsw6s\") pod \"openstack-cell1-galera-0\" (UID: \"e05f8763-9e64-4bf6-84c8-25df03057309\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.895162 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e05f8763-9e64-4bf6-84c8-25df03057309-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e05f8763-9e64-4bf6-84c8-25df03057309\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.895228 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e05f8763-9e64-4bf6-84c8-25df03057309-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e05f8763-9e64-4bf6-84c8-25df03057309\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.895262 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e05f8763-9e64-4bf6-84c8-25df03057309\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.895314 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e05f8763-9e64-4bf6-84c8-25df03057309-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: 
\"e05f8763-9e64-4bf6-84c8-25df03057309\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.895334 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e05f8763-9e64-4bf6-84c8-25df03057309-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e05f8763-9e64-4bf6-84c8-25df03057309\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.895898 5072 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e05f8763-9e64-4bf6-84c8-25df03057309\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.897003 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e05f8763-9e64-4bf6-84c8-25df03057309-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e05f8763-9e64-4bf6-84c8-25df03057309\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.899330 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e05f8763-9e64-4bf6-84c8-25df03057309-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e05f8763-9e64-4bf6-84c8-25df03057309\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.900695 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e05f8763-9e64-4bf6-84c8-25df03057309-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e05f8763-9e64-4bf6-84c8-25df03057309\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.909937 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e05f8763-9e64-4bf6-84c8-25df03057309-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e05f8763-9e64-4bf6-84c8-25df03057309\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.911741 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e05f8763-9e64-4bf6-84c8-25df03057309-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e05f8763-9e64-4bf6-84c8-25df03057309\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.914896 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e05f8763-9e64-4bf6-84c8-25df03057309-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e05f8763-9e64-4bf6-84c8-25df03057309\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:11 crc kubenswrapper[5072]: I1124 11:24:11.919186 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsw6s\" (UniqueName: \"kubernetes.io/projected/e05f8763-9e64-4bf6-84c8-25df03057309-kube-api-access-wsw6s\") pod \"openstack-cell1-galera-0\" (UID: \"e05f8763-9e64-4bf6-84c8-25df03057309\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:11 crc kubenswrapper[5072]: 
I1124 11:24:11.926359 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e05f8763-9e64-4bf6-84c8-25df03057309\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:12 crc kubenswrapper[5072]: I1124 11:24:12.017492 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:12 crc kubenswrapper[5072]: I1124 11:24:12.055472 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 24 11:24:12 crc kubenswrapper[5072]: I1124 11:24:12.056305 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 24 11:24:12 crc kubenswrapper[5072]: I1124 11:24:12.060089 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-m2bkd" Nov 24 11:24:12 crc kubenswrapper[5072]: I1124 11:24:12.060228 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 24 11:24:12 crc kubenswrapper[5072]: I1124 11:24:12.060422 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 24 11:24:12 crc kubenswrapper[5072]: I1124 11:24:12.070014 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 24 11:24:12 crc kubenswrapper[5072]: I1124 11:24:12.207168 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np5vt\" (UniqueName: \"kubernetes.io/projected/f0ecdfec-d313-40dc-97a6-344109151fe8-kube-api-access-np5vt\") pod \"memcached-0\" (UID: \"f0ecdfec-d313-40dc-97a6-344109151fe8\") " pod="openstack/memcached-0" Nov 24 11:24:12 crc kubenswrapper[5072]: I1124 11:24:12.207715 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f0ecdfec-d313-40dc-97a6-344109151fe8-kolla-config\") pod \"memcached-0\" (UID: \"f0ecdfec-d313-40dc-97a6-344109151fe8\") " pod="openstack/memcached-0" Nov 24 11:24:12 crc kubenswrapper[5072]: I1124 11:24:12.207749 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0ecdfec-d313-40dc-97a6-344109151fe8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f0ecdfec-d313-40dc-97a6-344109151fe8\") " pod="openstack/memcached-0" Nov 24 11:24:12 crc kubenswrapper[5072]: I1124 11:24:12.207808 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f0ecdfec-d313-40dc-97a6-344109151fe8-config-data\") pod \"memcached-0\" (UID: \"f0ecdfec-d313-40dc-97a6-344109151fe8\") " pod="openstack/memcached-0" Nov 24 11:24:12 crc kubenswrapper[5072]: I1124 11:24:12.207850 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0ecdfec-d313-40dc-97a6-344109151fe8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f0ecdfec-d313-40dc-97a6-344109151fe8\") " pod="openstack/memcached-0" Nov 24 11:24:12 crc kubenswrapper[5072]: I1124 11:24:12.309822 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f0ecdfec-d313-40dc-97a6-344109151fe8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f0ecdfec-d313-40dc-97a6-344109151fe8\") " pod="openstack/memcached-0" Nov 24 11:24:12 crc kubenswrapper[5072]: I1124 11:24:12.309870 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np5vt\" (UniqueName: \"kubernetes.io/projected/f0ecdfec-d313-40dc-97a6-344109151fe8-kube-api-access-np5vt\") pod \"memcached-0\" (UID: \"f0ecdfec-d313-40dc-97a6-344109151fe8\") " pod="openstack/memcached-0" Nov 24 11:24:12 crc kubenswrapper[5072]: I1124 11:24:12.309907 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f0ecdfec-d313-40dc-97a6-344109151fe8-kolla-config\") pod \"memcached-0\" (UID: \"f0ecdfec-d313-40dc-97a6-344109151fe8\") " pod="openstack/memcached-0" Nov 24 11:24:12 crc kubenswrapper[5072]: I1124 11:24:12.309942 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0ecdfec-d313-40dc-97a6-344109151fe8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f0ecdfec-d313-40dc-97a6-344109151fe8\") " pod="openstack/memcached-0" Nov 24 11:24:12 crc kubenswrapper[5072]: I1124 11:24:12.309998 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f0ecdfec-d313-40dc-97a6-344109151fe8-config-data\") pod \"memcached-0\" (UID: \"f0ecdfec-d313-40dc-97a6-344109151fe8\") " pod="openstack/memcached-0" Nov 24 11:24:12 crc kubenswrapper[5072]: I1124 11:24:12.311986 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f0ecdfec-d313-40dc-97a6-344109151fe8-config-data\") pod \"memcached-0\" (UID: \"f0ecdfec-d313-40dc-97a6-344109151fe8\") " pod="openstack/memcached-0" Nov 24 11:24:12 crc kubenswrapper[5072]: I1124 11:24:12.312521 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f0ecdfec-d313-40dc-97a6-344109151fe8-kolla-config\") pod \"memcached-0\" (UID: \"f0ecdfec-d313-40dc-97a6-344109151fe8\") " pod="openstack/memcached-0" Nov 24 11:24:12 crc kubenswrapper[5072]: I1124 11:24:12.313699 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0ecdfec-d313-40dc-97a6-344109151fe8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f0ecdfec-d313-40dc-97a6-344109151fe8\") " pod="openstack/memcached-0" Nov 24 11:24:12 crc kubenswrapper[5072]: I1124 11:24:12.315444 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0ecdfec-d313-40dc-97a6-344109151fe8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f0ecdfec-d313-40dc-97a6-344109151fe8\") " pod="openstack/memcached-0" Nov 24 11:24:12 crc kubenswrapper[5072]: I1124 11:24:12.331797 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np5vt\" (UniqueName: \"kubernetes.io/projected/f0ecdfec-d313-40dc-97a6-344109151fe8-kube-api-access-np5vt\") pod \"memcached-0\" (UID: \"f0ecdfec-d313-40dc-97a6-344109151fe8\") " pod="openstack/memcached-0" Nov 24 11:24:12 crc kubenswrapper[5072]: I1124 11:24:12.378497 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 24 11:24:13 crc kubenswrapper[5072]: I1124 11:24:13.622585 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 11:24:13 crc kubenswrapper[5072]: I1124 11:24:13.624081 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 11:24:13 crc kubenswrapper[5072]: I1124 11:24:13.626656 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-x62dt" Nov 24 11:24:13 crc kubenswrapper[5072]: I1124 11:24:13.631045 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 11:24:13 crc kubenswrapper[5072]: I1124 11:24:13.646613 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:24:13 crc kubenswrapper[5072]: I1124 11:24:13.646667 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:24:13 crc kubenswrapper[5072]: I1124 11:24:13.732931 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhsr4\" (UniqueName: \"kubernetes.io/projected/550025c7-4dd7-452e-85f8-6355aaa6feb6-kube-api-access-hhsr4\") pod \"kube-state-metrics-0\" (UID: \"550025c7-4dd7-452e-85f8-6355aaa6feb6\") " pod="openstack/kube-state-metrics-0" Nov 24 11:24:13 crc kubenswrapper[5072]: I1124 11:24:13.833925 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhsr4\" (UniqueName: \"kubernetes.io/projected/550025c7-4dd7-452e-85f8-6355aaa6feb6-kube-api-access-hhsr4\") pod \"kube-state-metrics-0\" (UID: \"550025c7-4dd7-452e-85f8-6355aaa6feb6\") " pod="openstack/kube-state-metrics-0" Nov 24 11:24:13 crc kubenswrapper[5072]: I1124 11:24:13.866402 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhsr4\" (UniqueName: \"kubernetes.io/projected/550025c7-4dd7-452e-85f8-6355aaa6feb6-kube-api-access-hhsr4\") pod \"kube-state-metrics-0\" (UID: \"550025c7-4dd7-452e-85f8-6355aaa6feb6\") " pod="openstack/kube-state-metrics-0" Nov 24 11:24:13 crc kubenswrapper[5072]: I1124 11:24:13.947869 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.403653 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.405634 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.410583 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.410603 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.410905 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-gb6zn" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.411338 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.411478 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.426049 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.493784 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9zpv\" (UniqueName: \"kubernetes.io/projected/e8ca3957-ce1c-49e8-a56b-d0f406d2e078-kube-api-access-h9zpv\") pod \"ovsdbserver-nb-0\" (UID: \"e8ca3957-ce1c-49e8-a56b-d0f406d2e078\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.493831 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e8ca3957-ce1c-49e8-a56b-d0f406d2e078\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.493870 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e8ca3957-ce1c-49e8-a56b-d0f406d2e078-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e8ca3957-ce1c-49e8-a56b-d0f406d2e078\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.493891 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8ca3957-ce1c-49e8-a56b-d0f406d2e078-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e8ca3957-ce1c-49e8-a56b-d0f406d2e078\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.494097 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ca3957-ce1c-49e8-a56b-d0f406d2e078-config\") pod \"ovsdbserver-nb-0\" (UID: \"e8ca3957-ce1c-49e8-a56b-d0f406d2e078\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.494360 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8ca3957-ce1c-49e8-a56b-d0f406d2e078-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e8ca3957-ce1c-49e8-a56b-d0f406d2e078\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.494617 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e8ca3957-ce1c-49e8-a56b-d0f406d2e078-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e8ca3957-ce1c-49e8-a56b-d0f406d2e078\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.494708 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8ca3957-ce1c-49e8-a56b-d0f406d2e078-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e8ca3957-ce1c-49e8-a56b-d0f406d2e078\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.596883 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9zpv\" (UniqueName: \"kubernetes.io/projected/e8ca3957-ce1c-49e8-a56b-d0f406d2e078-kube-api-access-h9zpv\") pod \"ovsdbserver-nb-0\" (UID: \"e8ca3957-ce1c-49e8-a56b-d0f406d2e078\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.596971 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e8ca3957-ce1c-49e8-a56b-d0f406d2e078\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.597033 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e8ca3957-ce1c-49e8-a56b-d0f406d2e078-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e8ca3957-ce1c-49e8-a56b-d0f406d2e078\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.597056 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8ca3957-ce1c-49e8-a56b-d0f406d2e078-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e8ca3957-ce1c-49e8-a56b-d0f406d2e078\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.597107 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ca3957-ce1c-49e8-a56b-d0f406d2e078-config\") pod \"ovsdbserver-nb-0\" (UID: \"e8ca3957-ce1c-49e8-a56b-d0f406d2e078\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.597149 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8ca3957-ce1c-49e8-a56b-d0f406d2e078-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e8ca3957-ce1c-49e8-a56b-d0f406d2e078\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.597212 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8ca3957-ce1c-49e8-a56b-d0f406d2e078-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e8ca3957-ce1c-49e8-a56b-d0f406d2e078\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.597259 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8ca3957-ce1c-49e8-a56b-d0f406d2e078-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e8ca3957-ce1c-49e8-a56b-d0f406d2e078\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 
11:24:17.597630 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e8ca3957-ce1c-49e8-a56b-d0f406d2e078-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e8ca3957-ce1c-49e8-a56b-d0f406d2e078\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.598481 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8ca3957-ce1c-49e8-a56b-d0f406d2e078-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e8ca3957-ce1c-49e8-a56b-d0f406d2e078\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.599016 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ca3957-ce1c-49e8-a56b-d0f406d2e078-config\") pod \"ovsdbserver-nb-0\" (UID: \"e8ca3957-ce1c-49e8-a56b-d0f406d2e078\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.599110 5072 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e8ca3957-ce1c-49e8-a56b-d0f406d2e078\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.616472 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8ca3957-ce1c-49e8-a56b-d0f406d2e078-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e8ca3957-ce1c-49e8-a56b-d0f406d2e078\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.616551 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8ca3957-ce1c-49e8-a56b-d0f406d2e078-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e8ca3957-ce1c-49e8-a56b-d0f406d2e078\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.631447 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8ca3957-ce1c-49e8-a56b-d0f406d2e078-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e8ca3957-ce1c-49e8-a56b-d0f406d2e078\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.635677 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9zpv\" (UniqueName: \"kubernetes.io/projected/e8ca3957-ce1c-49e8-a56b-d0f406d2e078-kube-api-access-h9zpv\") pod \"ovsdbserver-nb-0\" (UID: \"e8ca3957-ce1c-49e8-a56b-d0f406d2e078\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.684560 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e8ca3957-ce1c-49e8-a56b-d0f406d2e078\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.701137 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.728471 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.823395 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ltkhm"] Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.824629 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ltkhm" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.829042 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.829331 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.829513 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-zvztw" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.832582 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ltkhm"] Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.836897 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-7tcxz"] Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.838466 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-7tcxz" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.863216 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7tcxz"] Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.901245 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a15ce4b3-7344-4b9f-983a-0065209e9d68-var-run\") pod \"ovn-controller-ovs-7tcxz\" (UID: \"a15ce4b3-7344-4b9f-983a-0065209e9d68\") " pod="openstack/ovn-controller-ovs-7tcxz" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.901295 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a15ce4b3-7344-4b9f-983a-0065209e9d68-var-lib\") pod \"ovn-controller-ovs-7tcxz\" (UID: \"a15ce4b3-7344-4b9f-983a-0065209e9d68\") " pod="openstack/ovn-controller-ovs-7tcxz" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.901326 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1f48ba7-b537-4282-9eef-aee78410afcb-var-run\") pod \"ovn-controller-ltkhm\" (UID: \"d1f48ba7-b537-4282-9eef-aee78410afcb\") " pod="openstack/ovn-controller-ltkhm" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.901450 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg2xh\" (UniqueName: \"kubernetes.io/projected/d1f48ba7-b537-4282-9eef-aee78410afcb-kube-api-access-bg2xh\") pod \"ovn-controller-ltkhm\" (UID: \"d1f48ba7-b537-4282-9eef-aee78410afcb\") " pod="openstack/ovn-controller-ltkhm" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.901499 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d1f48ba7-b537-4282-9eef-aee78410afcb-var-run-ovn\") pod \"ovn-controller-ltkhm\" (UID: \"d1f48ba7-b537-4282-9eef-aee78410afcb\") " pod="openstack/ovn-controller-ltkhm" Nov 24 11:24:17 crc kubenswrapper[5072]: 
I1124 11:24:17.901517 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d1f48ba7-b537-4282-9eef-aee78410afcb-var-log-ovn\") pod \"ovn-controller-ltkhm\" (UID: \"d1f48ba7-b537-4282-9eef-aee78410afcb\") " pod="openstack/ovn-controller-ltkhm" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.901539 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f48ba7-b537-4282-9eef-aee78410afcb-combined-ca-bundle\") pod \"ovn-controller-ltkhm\" (UID: \"d1f48ba7-b537-4282-9eef-aee78410afcb\") " pod="openstack/ovn-controller-ltkhm" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.901697 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2vvl\" (UniqueName: \"kubernetes.io/projected/a15ce4b3-7344-4b9f-983a-0065209e9d68-kube-api-access-p2vvl\") pod \"ovn-controller-ovs-7tcxz\" (UID: \"a15ce4b3-7344-4b9f-983a-0065209e9d68\") " pod="openstack/ovn-controller-ovs-7tcxz" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.901766 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a15ce4b3-7344-4b9f-983a-0065209e9d68-etc-ovs\") pod \"ovn-controller-ovs-7tcxz\" (UID: \"a15ce4b3-7344-4b9f-983a-0065209e9d68\") " pod="openstack/ovn-controller-ovs-7tcxz" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.901788 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1f48ba7-b537-4282-9eef-aee78410afcb-scripts\") pod \"ovn-controller-ltkhm\" (UID: \"d1f48ba7-b537-4282-9eef-aee78410afcb\") " pod="openstack/ovn-controller-ltkhm" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.901862 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a15ce4b3-7344-4b9f-983a-0065209e9d68-var-log\") pod \"ovn-controller-ovs-7tcxz\" (UID: \"a15ce4b3-7344-4b9f-983a-0065209e9d68\") " pod="openstack/ovn-controller-ovs-7tcxz" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.901878 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a15ce4b3-7344-4b9f-983a-0065209e9d68-scripts\") pod \"ovn-controller-ovs-7tcxz\" (UID: \"a15ce4b3-7344-4b9f-983a-0065209e9d68\") " pod="openstack/ovn-controller-ovs-7tcxz" Nov 24 11:24:17 crc kubenswrapper[5072]: I1124 11:24:17.901898 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1f48ba7-b537-4282-9eef-aee78410afcb-ovn-controller-tls-certs\") pod \"ovn-controller-ltkhm\" (UID: \"d1f48ba7-b537-4282-9eef-aee78410afcb\") " pod="openstack/ovn-controller-ltkhm" Nov 24 11:24:18 crc kubenswrapper[5072]: I1124 11:24:18.003565 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d1f48ba7-b537-4282-9eef-aee78410afcb-var-run-ovn\") pod \"ovn-controller-ltkhm\" (UID: \"d1f48ba7-b537-4282-9eef-aee78410afcb\") " pod="openstack/ovn-controller-ltkhm" Nov 24 11:24:18 crc kubenswrapper[5072]: I1124 11:24:18.004875 5072 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d1f48ba7-b537-4282-9eef-aee78410afcb-var-log-ovn\") pod \"ovn-controller-ltkhm\" (UID: \"d1f48ba7-b537-4282-9eef-aee78410afcb\") " pod="openstack/ovn-controller-ltkhm" Nov 24 11:24:18 crc kubenswrapper[5072]: I1124 11:24:18.004831 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d1f48ba7-b537-4282-9eef-aee78410afcb-var-run-ovn\") pod \"ovn-controller-ltkhm\" (UID: \"d1f48ba7-b537-4282-9eef-aee78410afcb\") " pod="openstack/ovn-controller-ltkhm" Nov 24 11:24:18 crc kubenswrapper[5072]: I1124 11:24:18.007675 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f48ba7-b537-4282-9eef-aee78410afcb-combined-ca-bundle\") pod \"ovn-controller-ltkhm\" (UID: \"d1f48ba7-b537-4282-9eef-aee78410afcb\") " pod="openstack/ovn-controller-ltkhm" Nov 24 11:24:18 crc kubenswrapper[5072]: I1124 11:24:18.014606 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d1f48ba7-b537-4282-9eef-aee78410afcb-var-log-ovn\") pod \"ovn-controller-ltkhm\" (UID: \"d1f48ba7-b537-4282-9eef-aee78410afcb\") " pod="openstack/ovn-controller-ltkhm" Nov 24 11:24:18 crc kubenswrapper[5072]: I1124 11:24:18.014675 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f48ba7-b537-4282-9eef-aee78410afcb-combined-ca-bundle\") pod \"ovn-controller-ltkhm\" (UID: \"d1f48ba7-b537-4282-9eef-aee78410afcb\") " pod="openstack/ovn-controller-ltkhm" Nov 24 11:24:18 crc kubenswrapper[5072]: I1124 11:24:18.014929 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2vvl\" (UniqueName: \"kubernetes.io/projected/a15ce4b3-7344-4b9f-983a-0065209e9d68-kube-api-access-p2vvl\") pod \"ovn-controller-ovs-7tcxz\" (UID: \"a15ce4b3-7344-4b9f-983a-0065209e9d68\") " pod="openstack/ovn-controller-ovs-7tcxz" Nov 24 11:24:18 crc kubenswrapper[5072]: I1124 11:24:18.014991 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a15ce4b3-7344-4b9f-983a-0065209e9d68-etc-ovs\") pod \"ovn-controller-ovs-7tcxz\" (UID: \"a15ce4b3-7344-4b9f-983a-0065209e9d68\") " pod="openstack/ovn-controller-ovs-7tcxz" Nov 24 11:24:18 crc kubenswrapper[5072]: I1124 11:24:18.015065 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1f48ba7-b537-4282-9eef-aee78410afcb-scripts\") pod \"ovn-controller-ltkhm\" (UID: \"d1f48ba7-b537-4282-9eef-aee78410afcb\") " pod="openstack/ovn-controller-ltkhm" Nov 24 11:24:18 crc kubenswrapper[5072]: I1124 11:24:18.015207 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a15ce4b3-7344-4b9f-983a-0065209e9d68-var-log\") pod \"ovn-controller-ovs-7tcxz\" (UID: \"a15ce4b3-7344-4b9f-983a-0065209e9d68\") " pod="openstack/ovn-controller-ovs-7tcxz" Nov 24 11:24:18 crc kubenswrapper[5072]: I1124 11:24:18.015231 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a15ce4b3-7344-4b9f-983a-0065209e9d68-scripts\") pod \"ovn-controller-ovs-7tcxz\" (UID: 
\"a15ce4b3-7344-4b9f-983a-0065209e9d68\") " pod="openstack/ovn-controller-ovs-7tcxz" Nov 24 11:24:18 crc kubenswrapper[5072]: I1124 11:24:18.015249 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1f48ba7-b537-4282-9eef-aee78410afcb-ovn-controller-tls-certs\") pod \"ovn-controller-ltkhm\" (UID: \"d1f48ba7-b537-4282-9eef-aee78410afcb\") " pod="openstack/ovn-controller-ltkhm" Nov 24 11:24:18 crc kubenswrapper[5072]: I1124 11:24:18.015425 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a15ce4b3-7344-4b9f-983a-0065209e9d68-var-run\") pod \"ovn-controller-ovs-7tcxz\" (UID: \"a15ce4b3-7344-4b9f-983a-0065209e9d68\") " pod="openstack/ovn-controller-ovs-7tcxz" Nov 24 11:24:18 crc kubenswrapper[5072]: I1124 11:24:18.015489 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a15ce4b3-7344-4b9f-983a-0065209e9d68-var-log\") pod \"ovn-controller-ovs-7tcxz\" (UID: \"a15ce4b3-7344-4b9f-983a-0065209e9d68\") " pod="openstack/ovn-controller-ovs-7tcxz" Nov 24 11:24:18 crc kubenswrapper[5072]: I1124 11:24:18.015641 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a15ce4b3-7344-4b9f-983a-0065209e9d68-var-run\") pod \"ovn-controller-ovs-7tcxz\" (UID: \"a15ce4b3-7344-4b9f-983a-0065209e9d68\") " pod="openstack/ovn-controller-ovs-7tcxz" Nov 24 11:24:18 crc kubenswrapper[5072]: I1124 11:24:18.015688 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a15ce4b3-7344-4b9f-983a-0065209e9d68-var-lib\") pod \"ovn-controller-ovs-7tcxz\" (UID: \"a15ce4b3-7344-4b9f-983a-0065209e9d68\") " pod="openstack/ovn-controller-ovs-7tcxz" Nov 24 11:24:18 crc kubenswrapper[5072]: I1124 11:24:18.015709 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1f48ba7-b537-4282-9eef-aee78410afcb-var-run\") pod \"ovn-controller-ltkhm\" (UID: \"d1f48ba7-b537-4282-9eef-aee78410afcb\") " pod="openstack/ovn-controller-ltkhm" Nov 24 11:24:18 crc kubenswrapper[5072]: I1124 11:24:18.015751 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg2xh\" (UniqueName: \"kubernetes.io/projected/d1f48ba7-b537-4282-9eef-aee78410afcb-kube-api-access-bg2xh\") pod \"ovn-controller-ltkhm\" (UID: \"d1f48ba7-b537-4282-9eef-aee78410afcb\") " pod="openstack/ovn-controller-ltkhm" Nov 24 11:24:18 crc kubenswrapper[5072]: I1124 11:24:18.016434 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a15ce4b3-7344-4b9f-983a-0065209e9d68-var-lib\") pod \"ovn-controller-ovs-7tcxz\" (UID: \"a15ce4b3-7344-4b9f-983a-0065209e9d68\") " pod="openstack/ovn-controller-ovs-7tcxz" Nov 24 11:24:18 crc kubenswrapper[5072]: I1124 11:24:18.016929 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1f48ba7-b537-4282-9eef-aee78410afcb-var-run\") pod \"ovn-controller-ltkhm\" (UID: \"d1f48ba7-b537-4282-9eef-aee78410afcb\") " pod="openstack/ovn-controller-ltkhm" Nov 24 11:24:18 crc kubenswrapper[5072]: I1124 11:24:18.017011 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/a15ce4b3-7344-4b9f-983a-0065209e9d68-etc-ovs\") pod \"ovn-controller-ovs-7tcxz\" (UID: \"a15ce4b3-7344-4b9f-983a-0065209e9d68\") " pod="openstack/ovn-controller-ovs-7tcxz" Nov 24 11:24:18 crc kubenswrapper[5072]: I1124 11:24:18.017338 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1f48ba7-b537-4282-9eef-aee78410afcb-scripts\") pod \"ovn-controller-ltkhm\" (UID: \"d1f48ba7-b537-4282-9eef-aee78410afcb\") " pod="openstack/ovn-controller-ltkhm" Nov 24 11:24:18 crc kubenswrapper[5072]: I1124 11:24:18.017366 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a15ce4b3-7344-4b9f-983a-0065209e9d68-scripts\") pod \"ovn-controller-ovs-7tcxz\" (UID: \"a15ce4b3-7344-4b9f-983a-0065209e9d68\") " pod="openstack/ovn-controller-ovs-7tcxz" Nov 24 11:24:18 crc kubenswrapper[5072]: I1124 11:24:18.019303 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1f48ba7-b537-4282-9eef-aee78410afcb-ovn-controller-tls-certs\") pod \"ovn-controller-ltkhm\" (UID: \"d1f48ba7-b537-4282-9eef-aee78410afcb\") " pod="openstack/ovn-controller-ltkhm" Nov 24 11:24:18 crc kubenswrapper[5072]: I1124 11:24:18.030121 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2vvl\" (UniqueName: \"kubernetes.io/projected/a15ce4b3-7344-4b9f-983a-0065209e9d68-kube-api-access-p2vvl\") pod \"ovn-controller-ovs-7tcxz\" (UID: \"a15ce4b3-7344-4b9f-983a-0065209e9d68\") " pod="openstack/ovn-controller-ovs-7tcxz" Nov 24 11:24:18 crc kubenswrapper[5072]: I1124 11:24:18.036426 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg2xh\" (UniqueName: \"kubernetes.io/projected/d1f48ba7-b537-4282-9eef-aee78410afcb-kube-api-access-bg2xh\") pod \"ovn-controller-ltkhm\" (UID: \"d1f48ba7-b537-4282-9eef-aee78410afcb\") " pod="openstack/ovn-controller-ltkhm" Nov 24 11:24:18 crc kubenswrapper[5072]: I1124 11:24:18.156821 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ltkhm" Nov 24 11:24:18 crc kubenswrapper[5072]: I1124 11:24:18.170922 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-7tcxz" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.095579 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.097205 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.101175 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.101472 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-7k6tz" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.101642 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.105500 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.111652 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.167892 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c95fc4be-5531-4d4d-98a5-aeb6d64b732d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c95fc4be-5531-4d4d-98a5-aeb6d64b732d\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.168055 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95fc4be-5531-4d4d-98a5-aeb6d64b732d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c95fc4be-5531-4d4d-98a5-aeb6d64b732d\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.168111 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c95fc4be-5531-4d4d-98a5-aeb6d64b732d-config\") pod \"ovsdbserver-sb-0\" (UID: \"c95fc4be-5531-4d4d-98a5-aeb6d64b732d\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.168137 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95fc4be-5531-4d4d-98a5-aeb6d64b732d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c95fc4be-5531-4d4d-98a5-aeb6d64b732d\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.168196 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c95fc4be-5531-4d4d-98a5-aeb6d64b732d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c95fc4be-5531-4d4d-98a5-aeb6d64b732d\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.168257 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c95fc4be-5531-4d4d-98a5-aeb6d64b732d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c95fc4be-5531-4d4d-98a5-aeb6d64b732d\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.168312 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c77fw\" (UniqueName: \"kubernetes.io/projected/c95fc4be-5531-4d4d-98a5-aeb6d64b732d-kube-api-access-c77fw\") pod \"ovsdbserver-sb-0\" (UID: 
\"c95fc4be-5531-4d4d-98a5-aeb6d64b732d\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.168362 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c95fc4be-5531-4d4d-98a5-aeb6d64b732d\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.270062 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95fc4be-5531-4d4d-98a5-aeb6d64b732d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c95fc4be-5531-4d4d-98a5-aeb6d64b732d\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.270099 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95fc4be-5531-4d4d-98a5-aeb6d64b732d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c95fc4be-5531-4d4d-98a5-aeb6d64b732d\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.270120 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c95fc4be-5531-4d4d-98a5-aeb6d64b732d-config\") pod \"ovsdbserver-sb-0\" (UID: \"c95fc4be-5531-4d4d-98a5-aeb6d64b732d\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.270137 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c95fc4be-5531-4d4d-98a5-aeb6d64b732d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c95fc4be-5531-4d4d-98a5-aeb6d64b732d\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.270156 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c95fc4be-5531-4d4d-98a5-aeb6d64b732d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c95fc4be-5531-4d4d-98a5-aeb6d64b732d\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.270195 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c77fw\" (UniqueName: \"kubernetes.io/projected/c95fc4be-5531-4d4d-98a5-aeb6d64b732d-kube-api-access-c77fw\") pod \"ovsdbserver-sb-0\" (UID: \"c95fc4be-5531-4d4d-98a5-aeb6d64b732d\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.270219 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c95fc4be-5531-4d4d-98a5-aeb6d64b732d\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.270248 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c95fc4be-5531-4d4d-98a5-aeb6d64b732d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c95fc4be-5531-4d4d-98a5-aeb6d64b732d\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.270676 5072 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c95fc4be-5531-4d4d-98a5-aeb6d64b732d\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.270842 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c95fc4be-5531-4d4d-98a5-aeb6d64b732d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c95fc4be-5531-4d4d-98a5-aeb6d64b732d\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.271183 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c95fc4be-5531-4d4d-98a5-aeb6d64b732d-config\") pod \"ovsdbserver-sb-0\" (UID: \"c95fc4be-5531-4d4d-98a5-aeb6d64b732d\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.271865 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c95fc4be-5531-4d4d-98a5-aeb6d64b732d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c95fc4be-5531-4d4d-98a5-aeb6d64b732d\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.276252 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95fc4be-5531-4d4d-98a5-aeb6d64b732d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c95fc4be-5531-4d4d-98a5-aeb6d64b732d\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.276425 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c95fc4be-5531-4d4d-98a5-aeb6d64b732d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c95fc4be-5531-4d4d-98a5-aeb6d64b732d\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.279890 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95fc4be-5531-4d4d-98a5-aeb6d64b732d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c95fc4be-5531-4d4d-98a5-aeb6d64b732d\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.289186 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c77fw\" (UniqueName: \"kubernetes.io/projected/c95fc4be-5531-4d4d-98a5-aeb6d64b732d-kube-api-access-c77fw\") pod \"ovsdbserver-sb-0\" (UID: \"c95fc4be-5531-4d4d-98a5-aeb6d64b732d\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.293619 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c95fc4be-5531-4d4d-98a5-aeb6d64b732d\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:21 crc kubenswrapper[5072]: I1124 11:24:21.442431 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:22 crc kubenswrapper[5072]: I1124 11:24:22.805878 5072 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 11:24:22 crc kubenswrapper[5072]: E1124 11:24:22.819046 5072 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Nov 24 11:24:22 crc kubenswrapper[5072]: E1124 11:24:22.819557 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwffz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(354afe75-70d3-4c45-a990-0299f821b0af): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:24:22 crc kubenswrapper[5072]: E1124 11:24:22.820775 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="354afe75-70d3-4c45-a990-0299f821b0af" Nov 24 11:24:22 crc kubenswrapper[5072]: I1124 11:24:22.891129 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0f143b81-90ef-461e-a3b5-36ceb68eda94","Type":"ContainerStarted","Data":"1b1ef37bb68d01a8bdb5531d8d043537732d9235602ee8e7b861054780e5050f"} Nov 24 11:24:22 crc kubenswrapper[5072]: E1124 11:24:22.892522 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="354afe75-70d3-4c45-a990-0299f821b0af" Nov 24 11:24:23 crc kubenswrapper[5072]: E1124 11:24:23.584817 5072 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 24 11:24:23 crc kubenswrapper[5072]: E1124 11:24:23.584875 5072 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 24 11:24:23 crc kubenswrapper[5072]: E1124 11:24:23.585097 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sppkg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-dw29v_openstack(9f69faa2-9442-4a55-958e-c063925a5a93): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:24:23 crc kubenswrapper[5072]: E1124 11:24:23.585258 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mwrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-kphnt_openstack(02573658-0503-4bdb-81a8-21e289b8d886): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:24:23 crc kubenswrapper[5072]: E1124 11:24:23.586454 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-dw29v" podUID="9f69faa2-9442-4a55-958e-c063925a5a93" Nov 24 11:24:23 crc kubenswrapper[5072]: E1124 11:24:23.586513 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-kphnt" podUID="02573658-0503-4bdb-81a8-21e289b8d886" Nov 24 11:24:23 crc kubenswrapper[5072]: E1124 11:24:23.597105 5072 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 24 11:24:23 crc kubenswrapper[5072]: E1124 11:24:23.597210 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tcstw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-s6zc5_openstack(cec5ba71-80bf-469f-adb9-5d73a3e8eef9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:24:23 crc kubenswrapper[5072]: E1124 11:24:23.598328 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-s6zc5" podUID="cec5ba71-80bf-469f-adb9-5d73a3e8eef9" Nov 24 11:24:23 crc kubenswrapper[5072]: E1124 11:24:23.656665 5072 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 24 11:24:23 crc kubenswrapper[5072]: E1124 11:24:23.656812 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvpmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-l5dss_openstack(583d674d-7ef5-4897-9a08-e278ac090ee5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:24:23 crc kubenswrapper[5072]: E1124 11:24:23.659234 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-l5dss" podUID="583d674d-7ef5-4897-9a08-e278ac090ee5" Nov 24 11:24:23 crc kubenswrapper[5072]: E1124 11:24:23.905509 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-l5dss" podUID="583d674d-7ef5-4897-9a08-e278ac090ee5" Nov 24 11:24:23 crc kubenswrapper[5072]: E1124 11:24:23.906430 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-kphnt" podUID="02573658-0503-4bdb-81a8-21e289b8d886" Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.075643 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.090930 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 11:24:24 crc kubenswrapper[5072]: W1124 11:24:24.094473 5072 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod550025c7_4dd7_452e_85f8_6355aaa6feb6.slice/crio-f6dd3c766c75daad560ecfaf23e6c529a4a3e71322c280b71db505fb9d9412b6 WatchSource:0}: Error finding container f6dd3c766c75daad560ecfaf23e6c529a4a3e71322c280b71db505fb9d9412b6: Status 404 returned error can't find the container with id f6dd3c766c75daad560ecfaf23e6c529a4a3e71322c280b71db505fb9d9412b6 Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.103862 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 11:24:24 crc kubenswrapper[5072]: W1124 11:24:24.108607 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode05f8763_9e64_4bf6_84c8_25df03057309.slice/crio-d9c047dbd213c0e3b2fb6785da02fe55647e37eb3b512235a5a98f300af1fcd1 WatchSource:0}: Error finding container d9c047dbd213c0e3b2fb6785da02fe55647e37eb3b512235a5a98f300af1fcd1: Status 404 returned error can't find the container with id d9c047dbd213c0e3b2fb6785da02fe55647e37eb3b512235a5a98f300af1fcd1 Nov 24 11:24:24 crc kubenswrapper[5072]: W1124 11:24:24.109685 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8ca3957_ce1c_49e8_a56b_d0f406d2e078.slice/crio-f25a02a28ec5ecdd45781a3e006178d1db548e0c538d776d227ed00132692463 WatchSource:0}: Error finding container f25a02a28ec5ecdd45781a3e006178d1db548e0c538d776d227ed00132692463: Status 404 returned error can't find the container with id f25a02a28ec5ecdd45781a3e006178d1db548e0c538d776d227ed00132692463 Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.367948 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-dw29v" Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.405021 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ltkhm"] Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.413874 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 24 11:24:24 crc kubenswrapper[5072]: W1124 11:24:24.419320 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0ecdfec_d313_40dc_97a6_344109151fe8.slice/crio-1c428f839ad4d7c37251c6aa06d0318dd227fcb6c8c027fd79a03da2b581c4a2 WatchSource:0}: Error finding container 1c428f839ad4d7c37251c6aa06d0318dd227fcb6c8c027fd79a03da2b581c4a2: Status 404 returned error can't find the container with id 1c428f839ad4d7c37251c6aa06d0318dd227fcb6c8c027fd79a03da2b581c4a2 Nov 24 11:24:24 crc kubenswrapper[5072]: W1124 11:24:24.420983 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1f48ba7_b537_4282_9eef_aee78410afcb.slice/crio-3bad2d583e6e4ae5b280ee1ce50e2d90d2b4d4b398ae5f55198beda4080f66e1 WatchSource:0}: Error finding container 3bad2d583e6e4ae5b280ee1ce50e2d90d2b4d4b398ae5f55198beda4080f66e1: Status 404 returned error can't find the container with id 3bad2d583e6e4ae5b280ee1ce50e2d90d2b4d4b398ae5f55198beda4080f66e1 Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.424190 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f69faa2-9442-4a55-958e-c063925a5a93-config\") pod \"9f69faa2-9442-4a55-958e-c063925a5a93\" (UID: 
\"9f69faa2-9442-4a55-958e-c063925a5a93\") " Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.424225 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f69faa2-9442-4a55-958e-c063925a5a93-dns-svc\") pod \"9f69faa2-9442-4a55-958e-c063925a5a93\" (UID: \"9f69faa2-9442-4a55-958e-c063925a5a93\") " Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.424344 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sppkg\" (UniqueName: \"kubernetes.io/projected/9f69faa2-9442-4a55-958e-c063925a5a93-kube-api-access-sppkg\") pod \"9f69faa2-9442-4a55-958e-c063925a5a93\" (UID: \"9f69faa2-9442-4a55-958e-c063925a5a93\") " Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.425137 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f69faa2-9442-4a55-958e-c063925a5a93-config" (OuterVolumeSpecName: "config") pod "9f69faa2-9442-4a55-958e-c063925a5a93" (UID: "9f69faa2-9442-4a55-958e-c063925a5a93"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.425477 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f69faa2-9442-4a55-958e-c063925a5a93-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9f69faa2-9442-4a55-958e-c063925a5a93" (UID: "9f69faa2-9442-4a55-958e-c063925a5a93"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.433063 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f69faa2-9442-4a55-958e-c063925a5a93-kube-api-access-sppkg" (OuterVolumeSpecName: "kube-api-access-sppkg") pod "9f69faa2-9442-4a55-958e-c063925a5a93" (UID: "9f69faa2-9442-4a55-958e-c063925a5a93"). InnerVolumeSpecName "kube-api-access-sppkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.513961 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.526072 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sppkg\" (UniqueName: \"kubernetes.io/projected/9f69faa2-9442-4a55-958e-c063925a5a93-kube-api-access-sppkg\") on node \"crc\" DevicePath \"\"" Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.526109 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f69faa2-9442-4a55-958e-c063925a5a93-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.526118 5072 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f69faa2-9442-4a55-958e-c063925a5a93-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.542779 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-s6zc5" Nov 24 11:24:24 crc kubenswrapper[5072]: W1124 11:24:24.591111 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc95fc4be_5531_4d4d_98a5_aeb6d64b732d.slice/crio-1bf3fadc1d1c619e8fb2479f3f6f381d823a67425cd5d636bf0391bbabfe4c3b WatchSource:0}: Error finding container 1bf3fadc1d1c619e8fb2479f3f6f381d823a67425cd5d636bf0391bbabfe4c3b: Status 404 returned error can't find the container with id 1bf3fadc1d1c619e8fb2479f3f6f381d823a67425cd5d636bf0391bbabfe4c3b Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.609768 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7tcxz"] Nov 24 11:24:24 crc kubenswrapper[5072]: W1124 11:24:24.612684 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda15ce4b3_7344_4b9f_983a_0065209e9d68.slice/crio-951ea005760b8b0ca03c53a2b17af1e8d9f948456d56eeef0b779aa2e94cba25 WatchSource:0}: Error finding container 951ea005760b8b0ca03c53a2b17af1e8d9f948456d56eeef0b779aa2e94cba25: Status 404 returned error can't find the container with id 951ea005760b8b0ca03c53a2b17af1e8d9f948456d56eeef0b779aa2e94cba25 Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.627654 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cec5ba71-80bf-469f-adb9-5d73a3e8eef9-config\") pod \"cec5ba71-80bf-469f-adb9-5d73a3e8eef9\" (UID: \"cec5ba71-80bf-469f-adb9-5d73a3e8eef9\") " Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.627800 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcstw\" (UniqueName: \"kubernetes.io/projected/cec5ba71-80bf-469f-adb9-5d73a3e8eef9-kube-api-access-tcstw\") pod \"cec5ba71-80bf-469f-adb9-5d73a3e8eef9\" (UID: \"cec5ba71-80bf-469f-adb9-5d73a3e8eef9\") " Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.629157 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cec5ba71-80bf-469f-adb9-5d73a3e8eef9-config" (OuterVolumeSpecName: "config") pod "cec5ba71-80bf-469f-adb9-5d73a3e8eef9" (UID: "cec5ba71-80bf-469f-adb9-5d73a3e8eef9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.632124 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cec5ba71-80bf-469f-adb9-5d73a3e8eef9-kube-api-access-tcstw" (OuterVolumeSpecName: "kube-api-access-tcstw") pod "cec5ba71-80bf-469f-adb9-5d73a3e8eef9" (UID: "cec5ba71-80bf-469f-adb9-5d73a3e8eef9"). InnerVolumeSpecName "kube-api-access-tcstw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.763837 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cec5ba71-80bf-469f-adb9-5d73a3e8eef9-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.763873 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcstw\" (UniqueName: \"kubernetes.io/projected/cec5ba71-80bf-469f-adb9-5d73a3e8eef9-kube-api-access-tcstw\") on node \"crc\" DevicePath \"\"" Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.910188 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f0ecdfec-d313-40dc-97a6-344109151fe8","Type":"ContainerStarted","Data":"1c428f839ad4d7c37251c6aa06d0318dd227fcb6c8c027fd79a03da2b581c4a2"} Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.911988 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-dw29v" event={"ID":"9f69faa2-9442-4a55-958e-c063925a5a93","Type":"ContainerDied","Data":"e8bdb9eb9c6de57d5d9d50314dc20f71c6c57e95d7678c7176528a7eaf013b07"} Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.911995 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-dw29v" Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.913297 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"224cff60-3d72-478d-9788-926bbca42ad2","Type":"ContainerStarted","Data":"2e81d597c043ecd78e584bee1d8d13ad13881786d38a4fbb7fe5f5e65775c121"} Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.915096 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e8ca3957-ce1c-49e8-a56b-d0f406d2e078","Type":"ContainerStarted","Data":"f25a02a28ec5ecdd45781a3e006178d1db548e0c538d776d227ed00132692463"} Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.917044 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7tcxz" event={"ID":"a15ce4b3-7344-4b9f-983a-0065209e9d68","Type":"ContainerStarted","Data":"951ea005760b8b0ca03c53a2b17af1e8d9f948456d56eeef0b779aa2e94cba25"} Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.918728 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-s6zc5" event={"ID":"cec5ba71-80bf-469f-adb9-5d73a3e8eef9","Type":"ContainerDied","Data":"966e3b5661b43c72dd7f5e96365ac0e88ad00cd89928033de1b92c8a84afffc2"} Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.918775 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-s6zc5" Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.920912 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"550025c7-4dd7-452e-85f8-6355aaa6feb6","Type":"ContainerStarted","Data":"f6dd3c766c75daad560ecfaf23e6c529a4a3e71322c280b71db505fb9d9412b6"} Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.922777 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c95fc4be-5531-4d4d-98a5-aeb6d64b732d","Type":"ContainerStarted","Data":"1bf3fadc1d1c619e8fb2479f3f6f381d823a67425cd5d636bf0391bbabfe4c3b"} Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.924199 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e05f8763-9e64-4bf6-84c8-25df03057309","Type":"ContainerStarted","Data":"d9c047dbd213c0e3b2fb6785da02fe55647e37eb3b512235a5a98f300af1fcd1"} Nov 24 11:24:24 crc kubenswrapper[5072]: I1124 11:24:24.925197 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ltkhm" event={"ID":"d1f48ba7-b537-4282-9eef-aee78410afcb","Type":"ContainerStarted","Data":"3bad2d583e6e4ae5b280ee1ce50e2d90d2b4d4b398ae5f55198beda4080f66e1"} Nov 24 11:24:25 crc kubenswrapper[5072]: I1124 11:24:25.008068 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-dw29v"] Nov 24 11:24:25 crc kubenswrapper[5072]: I1124 11:24:25.014850 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-dw29v"] Nov 24 11:24:25 crc kubenswrapper[5072]: I1124 11:24:25.029486 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f69faa2-9442-4a55-958e-c063925a5a93" path="/var/lib/kubelet/pods/9f69faa2-9442-4a55-958e-c063925a5a93/volumes" Nov 24 11:24:25 crc kubenswrapper[5072]: I1124 11:24:25.029821 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-s6zc5"] Nov 24 11:24:25 crc kubenswrapper[5072]: I1124 11:24:25.029845 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-s6zc5"] Nov 24 11:24:27 crc kubenswrapper[5072]: I1124 11:24:27.025708 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cec5ba71-80bf-469f-adb9-5d73a3e8eef9" path="/var/lib/kubelet/pods/cec5ba71-80bf-469f-adb9-5d73a3e8eef9/volumes" Nov 24 11:24:30 crc kubenswrapper[5072]: I1124 11:24:30.976050 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e05f8763-9e64-4bf6-84c8-25df03057309","Type":"ContainerStarted","Data":"29c5f08bd04e62feab044b00e4c71dae69d7c00128fee5bbd0ab77f34beb5f9d"} Nov 24 11:24:30 crc kubenswrapper[5072]: I1124 11:24:30.978828 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f0ecdfec-d313-40dc-97a6-344109151fe8","Type":"ContainerStarted","Data":"9135002b5fb6aad24e752407e1d5e9324c536f3ce03c8500cf090ef9c809281e"} Nov 24 11:24:30 crc kubenswrapper[5072]: I1124 11:24:30.978950 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 24 11:24:31 crc kubenswrapper[5072]: I1124 11:24:31.990193 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"550025c7-4dd7-452e-85f8-6355aaa6feb6","Type":"ContainerStarted","Data":"4ae022196a19d67accf88e4d57f525bb2bb37c8d0e158d122c4641d674f78983"} Nov 24 11:24:31 crc 
kubenswrapper[5072]: I1124 11:24:31.990643 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 24 11:24:31 crc kubenswrapper[5072]: I1124 11:24:31.993400 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c95fc4be-5531-4d4d-98a5-aeb6d64b732d","Type":"ContainerStarted","Data":"d8b6ec11d0f4dc426f5a07280ddc5903ce765fead7812dd1584e77dd63bf9088"} Nov 24 11:24:31 crc kubenswrapper[5072]: I1124 11:24:31.997001 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0f143b81-90ef-461e-a3b5-36ceb68eda94","Type":"ContainerStarted","Data":"a5c8d0a3bbf524f21446eef6b921c3504ae54471b96b8bf211c20085f74b99e7"} Nov 24 11:24:32 crc kubenswrapper[5072]: I1124 11:24:32.000389 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e8ca3957-ce1c-49e8-a56b-d0f406d2e078","Type":"ContainerStarted","Data":"35bdf4c62935995c3077414dec74b4363179fe604c85570ca0f99156d7c59986"} Nov 24 11:24:32 crc kubenswrapper[5072]: I1124 11:24:32.005231 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ltkhm" event={"ID":"d1f48ba7-b537-4282-9eef-aee78410afcb","Type":"ContainerStarted","Data":"fe1e39878fd36d7f0d034c9fc9482a3b20dc2bfdfa3bd96c0605ee948902bebb"} Nov 24 11:24:32 crc kubenswrapper[5072]: I1124 11:24:32.005402 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ltkhm" Nov 24 11:24:32 crc kubenswrapper[5072]: I1124 11:24:32.009117 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=14.632217826 podStartE2EDuration="20.009096821s" podCreationTimestamp="2025-11-24 11:24:12 +0000 UTC" firstStartedPulling="2025-11-24 11:24:24.423712854 +0000 UTC m=+916.135237330" lastFinishedPulling="2025-11-24 11:24:29.800591839 +0000 UTC m=+921.512116325" observedRunningTime="2025-11-24 11:24:31.013978446 +0000 UTC m=+922.725502922" watchObservedRunningTime="2025-11-24 11:24:32.009096821 +0000 UTC m=+923.720621287" Nov 24 11:24:32 crc kubenswrapper[5072]: I1124 11:24:32.009796 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=12.407129026 podStartE2EDuration="19.009790549s" podCreationTimestamp="2025-11-24 11:24:13 +0000 UTC" firstStartedPulling="2025-11-24 11:24:24.101528286 +0000 UTC m=+915.813052762" lastFinishedPulling="2025-11-24 11:24:30.704189809 +0000 UTC m=+922.415714285" observedRunningTime="2025-11-24 11:24:32.004239199 +0000 UTC m=+923.715763675" watchObservedRunningTime="2025-11-24 11:24:32.009790549 +0000 UTC m=+923.721315025" Nov 24 11:24:32 crc kubenswrapper[5072]: I1124 11:24:32.012032 5072 generic.go:334] "Generic (PLEG): container finished" podID="a15ce4b3-7344-4b9f-983a-0065209e9d68" containerID="0408f54589c129e400b251fb00deb1567a2165533c7544818b5d99f009b204e4" exitCode=0 Nov 24 11:24:32 crc kubenswrapper[5072]: I1124 11:24:32.012080 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7tcxz" event={"ID":"a15ce4b3-7344-4b9f-983a-0065209e9d68","Type":"ContainerDied","Data":"0408f54589c129e400b251fb00deb1567a2165533c7544818b5d99f009b204e4"} Nov 24 11:24:32 crc kubenswrapper[5072]: I1124 11:24:32.072872 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ltkhm" podStartSLOduration=8.874907932 
podStartE2EDuration="15.072854988s" podCreationTimestamp="2025-11-24 11:24:17 +0000 UTC" firstStartedPulling="2025-11-24 11:24:24.423634632 +0000 UTC m=+916.135159108" lastFinishedPulling="2025-11-24 11:24:30.621581688 +0000 UTC m=+922.333106164" observedRunningTime="2025-11-24 11:24:32.069977316 +0000 UTC m=+923.781501792" watchObservedRunningTime="2025-11-24 11:24:32.072854988 +0000 UTC m=+923.784379464" Nov 24 11:24:33 crc kubenswrapper[5072]: I1124 11:24:33.036333 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7tcxz" event={"ID":"a15ce4b3-7344-4b9f-983a-0065209e9d68","Type":"ContainerStarted","Data":"09b2a1860161f1752788ac3249529f560e13a95768fdb51937cd40308630aac1"} Nov 24 11:24:33 crc kubenswrapper[5072]: I1124 11:24:33.037010 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7tcxz" event={"ID":"a15ce4b3-7344-4b9f-983a-0065209e9d68","Type":"ContainerStarted","Data":"abf5814964b544a64ed0373e4eb01c0e544b46a116633f564bdfb385c3051168"} Nov 24 11:24:33 crc kubenswrapper[5072]: I1124 11:24:33.171339 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7tcxz" Nov 24 11:24:33 crc kubenswrapper[5072]: I1124 11:24:33.171465 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7tcxz" Nov 24 11:24:35 crc kubenswrapper[5072]: I1124 11:24:35.041337 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c95fc4be-5531-4d4d-98a5-aeb6d64b732d","Type":"ContainerStarted","Data":"5f869a949cd3e10a7cca1c89005b23cc1b9b16b6f580cd0feda1679249acbac5"} Nov 24 11:24:35 crc kubenswrapper[5072]: I1124 11:24:35.043783 5072 generic.go:334] "Generic (PLEG): container finished" podID="0f143b81-90ef-461e-a3b5-36ceb68eda94" containerID="a5c8d0a3bbf524f21446eef6b921c3504ae54471b96b8bf211c20085f74b99e7" exitCode=0 Nov 24 11:24:35 crc kubenswrapper[5072]: I1124 11:24:35.043856 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0f143b81-90ef-461e-a3b5-36ceb68eda94","Type":"ContainerDied","Data":"a5c8d0a3bbf524f21446eef6b921c3504ae54471b96b8bf211c20085f74b99e7"} Nov 24 11:24:35 crc kubenswrapper[5072]: I1124 11:24:35.062602 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e8ca3957-ce1c-49e8-a56b-d0f406d2e078","Type":"ContainerStarted","Data":"1d6e96e49d7af705002219b2bebed50638ea93c92a5539fd2aaff0b29c14c35b"} Nov 24 11:24:35 crc kubenswrapper[5072]: I1124 11:24:35.063713 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-7tcxz" podStartSLOduration=12.055482171 podStartE2EDuration="18.063698196s" podCreationTimestamp="2025-11-24 11:24:17 +0000 UTC" firstStartedPulling="2025-11-24 11:24:24.615072396 +0000 UTC m=+916.326596872" lastFinishedPulling="2025-11-24 11:24:30.623288381 +0000 UTC m=+922.334812897" observedRunningTime="2025-11-24 11:24:33.046481853 +0000 UTC m=+924.758006409" watchObservedRunningTime="2025-11-24 11:24:35.063698196 +0000 UTC m=+926.775222682" Nov 24 11:24:35 crc kubenswrapper[5072]: I1124 11:24:35.066356 5072 generic.go:334] "Generic (PLEG): container finished" podID="e05f8763-9e64-4bf6-84c8-25df03057309" containerID="29c5f08bd04e62feab044b00e4c71dae69d7c00128fee5bbd0ab77f34beb5f9d" exitCode=0 Nov 24 11:24:35 crc kubenswrapper[5072]: I1124 11:24:35.067093 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-cell1-galera-0" event={"ID":"e05f8763-9e64-4bf6-84c8-25df03057309","Type":"ContainerDied","Data":"29c5f08bd04e62feab044b00e4c71dae69d7c00128fee5bbd0ab77f34beb5f9d"} Nov 24 11:24:35 crc kubenswrapper[5072]: I1124 11:24:35.079294 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=5.625974169 podStartE2EDuration="15.079269948s" podCreationTimestamp="2025-11-24 11:24:20 +0000 UTC" firstStartedPulling="2025-11-24 11:24:24.593500722 +0000 UTC m=+916.305025198" lastFinishedPulling="2025-11-24 11:24:34.046796461 +0000 UTC m=+925.758320977" observedRunningTime="2025-11-24 11:24:35.074232971 +0000 UTC m=+926.785757457" watchObservedRunningTime="2025-11-24 11:24:35.079269948 +0000 UTC m=+926.790794434" Nov 24 11:24:35 crc kubenswrapper[5072]: I1124 11:24:35.159702 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=9.24260448 podStartE2EDuration="19.159684664s" podCreationTimestamp="2025-11-24 11:24:16 +0000 UTC" firstStartedPulling="2025-11-24 11:24:24.112673277 +0000 UTC m=+915.824197753" lastFinishedPulling="2025-11-24 11:24:34.029753461 +0000 UTC m=+925.741277937" observedRunningTime="2025-11-24 11:24:35.152161135 +0000 UTC m=+926.863685621" watchObservedRunningTime="2025-11-24 11:24:35.159684664 +0000 UTC m=+926.871209150" Nov 24 11:24:35 crc kubenswrapper[5072]: I1124 11:24:35.729154 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:35 crc kubenswrapper[5072]: I1124 11:24:35.795508 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.078697 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.127584 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.431929 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kphnt"] Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.442622 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.442753 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.457016 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-dwffh"] Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.458300 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-dwffh" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.462295 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.471747 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-lhcvx"] Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.472999 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-lhcvx" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.490847 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.494336 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dwffh"] Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.499620 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx8z7\" (UniqueName: \"kubernetes.io/projected/6dc3beca-8832-4852-a397-cca5accca1a1-kube-api-access-sx8z7\") pod \"ovn-controller-metrics-dwffh\" (UID: \"6dc3beca-8832-4852-a397-cca5accca1a1\") " pod="openstack/ovn-controller-metrics-dwffh" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.499720 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8q2z\" (UniqueName: \"kubernetes.io/projected/44ffb5b8-a638-4300-a2dc-6a0007c09e1c-kube-api-access-j8q2z\") pod \"dnsmasq-dns-7fd796d7df-lhcvx\" (UID: \"44ffb5b8-a638-4300-a2dc-6a0007c09e1c\") " pod="openstack/dnsmasq-dns-7fd796d7df-lhcvx" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.499745 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dc3beca-8832-4852-a397-cca5accca1a1-combined-ca-bundle\") pod \"ovn-controller-metrics-dwffh\" (UID: \"6dc3beca-8832-4852-a397-cca5accca1a1\") " pod="openstack/ovn-controller-metrics-dwffh" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.499783 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44ffb5b8-a638-4300-a2dc-6a0007c09e1c-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-lhcvx\" (UID: \"44ffb5b8-a638-4300-a2dc-6a0007c09e1c\") " pod="openstack/dnsmasq-dns-7fd796d7df-lhcvx" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.499993 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dc3beca-8832-4852-a397-cca5accca1a1-config\") pod \"ovn-controller-metrics-dwffh\" (UID: \"6dc3beca-8832-4852-a397-cca5accca1a1\") " pod="openstack/ovn-controller-metrics-dwffh" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.500032 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44ffb5b8-a638-4300-a2dc-6a0007c09e1c-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-lhcvx\" (UID: \"44ffb5b8-a638-4300-a2dc-6a0007c09e1c\") " pod="openstack/dnsmasq-dns-7fd796d7df-lhcvx" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.500145 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dc3beca-8832-4852-a397-cca5accca1a1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dwffh\" (UID: \"6dc3beca-8832-4852-a397-cca5accca1a1\") " pod="openstack/ovn-controller-metrics-dwffh" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.500178 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44ffb5b8-a638-4300-a2dc-6a0007c09e1c-config\") pod 
\"dnsmasq-dns-7fd796d7df-lhcvx\" (UID: \"44ffb5b8-a638-4300-a2dc-6a0007c09e1c\") " pod="openstack/dnsmasq-dns-7fd796d7df-lhcvx" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.500199 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6dc3beca-8832-4852-a397-cca5accca1a1-ovn-rundir\") pod \"ovn-controller-metrics-dwffh\" (UID: \"6dc3beca-8832-4852-a397-cca5accca1a1\") " pod="openstack/ovn-controller-metrics-dwffh" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.500221 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6dc3beca-8832-4852-a397-cca5accca1a1-ovs-rundir\") pod \"ovn-controller-metrics-dwffh\" (UID: \"6dc3beca-8832-4852-a397-cca5accca1a1\") " pod="openstack/ovn-controller-metrics-dwffh" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.507346 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.538782 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-lhcvx"] Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.601884 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8q2z\" (UniqueName: \"kubernetes.io/projected/44ffb5b8-a638-4300-a2dc-6a0007c09e1c-kube-api-access-j8q2z\") pod \"dnsmasq-dns-7fd796d7df-lhcvx\" (UID: \"44ffb5b8-a638-4300-a2dc-6a0007c09e1c\") " pod="openstack/dnsmasq-dns-7fd796d7df-lhcvx" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.601928 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dc3beca-8832-4852-a397-cca5accca1a1-combined-ca-bundle\") pod \"ovn-controller-metrics-dwffh\" (UID: \"6dc3beca-8832-4852-a397-cca5accca1a1\") " pod="openstack/ovn-controller-metrics-dwffh" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.601967 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44ffb5b8-a638-4300-a2dc-6a0007c09e1c-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-lhcvx\" (UID: \"44ffb5b8-a638-4300-a2dc-6a0007c09e1c\") " pod="openstack/dnsmasq-dns-7fd796d7df-lhcvx" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.601997 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dc3beca-8832-4852-a397-cca5accca1a1-config\") pod \"ovn-controller-metrics-dwffh\" (UID: \"6dc3beca-8832-4852-a397-cca5accca1a1\") " pod="openstack/ovn-controller-metrics-dwffh" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.602025 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44ffb5b8-a638-4300-a2dc-6a0007c09e1c-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-lhcvx\" (UID: \"44ffb5b8-a638-4300-a2dc-6a0007c09e1c\") " pod="openstack/dnsmasq-dns-7fd796d7df-lhcvx" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.602095 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dc3beca-8832-4852-a397-cca5accca1a1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dwffh\" (UID: \"6dc3beca-8832-4852-a397-cca5accca1a1\") " 
pod="openstack/ovn-controller-metrics-dwffh" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.602120 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44ffb5b8-a638-4300-a2dc-6a0007c09e1c-config\") pod \"dnsmasq-dns-7fd796d7df-lhcvx\" (UID: \"44ffb5b8-a638-4300-a2dc-6a0007c09e1c\") " pod="openstack/dnsmasq-dns-7fd796d7df-lhcvx" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.602141 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6dc3beca-8832-4852-a397-cca5accca1a1-ovn-rundir\") pod \"ovn-controller-metrics-dwffh\" (UID: \"6dc3beca-8832-4852-a397-cca5accca1a1\") " pod="openstack/ovn-controller-metrics-dwffh" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.602162 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6dc3beca-8832-4852-a397-cca5accca1a1-ovs-rundir\") pod \"ovn-controller-metrics-dwffh\" (UID: \"6dc3beca-8832-4852-a397-cca5accca1a1\") " pod="openstack/ovn-controller-metrics-dwffh" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.602211 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx8z7\" (UniqueName: \"kubernetes.io/projected/6dc3beca-8832-4852-a397-cca5accca1a1-kube-api-access-sx8z7\") pod \"ovn-controller-metrics-dwffh\" (UID: \"6dc3beca-8832-4852-a397-cca5accca1a1\") " pod="openstack/ovn-controller-metrics-dwffh" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.602517 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6dc3beca-8832-4852-a397-cca5accca1a1-ovs-rundir\") pod \"ovn-controller-metrics-dwffh\" (UID: \"6dc3beca-8832-4852-a397-cca5accca1a1\") " pod="openstack/ovn-controller-metrics-dwffh" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.602916 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44ffb5b8-a638-4300-a2dc-6a0007c09e1c-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-lhcvx\" (UID: \"44ffb5b8-a638-4300-a2dc-6a0007c09e1c\") " pod="openstack/dnsmasq-dns-7fd796d7df-lhcvx" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.602943 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44ffb5b8-a638-4300-a2dc-6a0007c09e1c-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-lhcvx\" (UID: \"44ffb5b8-a638-4300-a2dc-6a0007c09e1c\") " pod="openstack/dnsmasq-dns-7fd796d7df-lhcvx" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.603239 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44ffb5b8-a638-4300-a2dc-6a0007c09e1c-config\") pod \"dnsmasq-dns-7fd796d7df-lhcvx\" (UID: \"44ffb5b8-a638-4300-a2dc-6a0007c09e1c\") " pod="openstack/dnsmasq-dns-7fd796d7df-lhcvx" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.603470 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dc3beca-8832-4852-a397-cca5accca1a1-config\") pod \"ovn-controller-metrics-dwffh\" (UID: \"6dc3beca-8832-4852-a397-cca5accca1a1\") " pod="openstack/ovn-controller-metrics-dwffh" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.604136 5072 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6dc3beca-8832-4852-a397-cca5accca1a1-ovn-rundir\") pod \"ovn-controller-metrics-dwffh\" (UID: \"6dc3beca-8832-4852-a397-cca5accca1a1\") " pod="openstack/ovn-controller-metrics-dwffh" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.608803 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dc3beca-8832-4852-a397-cca5accca1a1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dwffh\" (UID: \"6dc3beca-8832-4852-a397-cca5accca1a1\") " pod="openstack/ovn-controller-metrics-dwffh" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.611014 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dc3beca-8832-4852-a397-cca5accca1a1-combined-ca-bundle\") pod \"ovn-controller-metrics-dwffh\" (UID: \"6dc3beca-8832-4852-a397-cca5accca1a1\") " pod="openstack/ovn-controller-metrics-dwffh" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.618149 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8q2z\" (UniqueName: \"kubernetes.io/projected/44ffb5b8-a638-4300-a2dc-6a0007c09e1c-kube-api-access-j8q2z\") pod \"dnsmasq-dns-7fd796d7df-lhcvx\" (UID: \"44ffb5b8-a638-4300-a2dc-6a0007c09e1c\") " pod="openstack/dnsmasq-dns-7fd796d7df-lhcvx" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.620339 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx8z7\" (UniqueName: \"kubernetes.io/projected/6dc3beca-8832-4852-a397-cca5accca1a1-kube-api-access-sx8z7\") pod \"ovn-controller-metrics-dwffh\" (UID: \"6dc3beca-8832-4852-a397-cca5accca1a1\") " pod="openstack/ovn-controller-metrics-dwffh" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.705991 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-l5dss"] Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.760879 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7wvdb"] Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.762596 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7wvdb" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.768030 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.769673 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7wvdb"] Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.819709 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0f0d5b2-2676-4305-8072-10fce8aeb222-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-7wvdb\" (UID: \"d0f0d5b2-2676-4305-8072-10fce8aeb222\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wvdb" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.819750 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-dwffh" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.819806 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0f0d5b2-2676-4305-8072-10fce8aeb222-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-7wvdb\" (UID: \"d0f0d5b2-2676-4305-8072-10fce8aeb222\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wvdb" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.819842 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0f0d5b2-2676-4305-8072-10fce8aeb222-config\") pod \"dnsmasq-dns-86db49b7ff-7wvdb\" (UID: \"d0f0d5b2-2676-4305-8072-10fce8aeb222\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wvdb" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.819883 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0f0d5b2-2676-4305-8072-10fce8aeb222-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-7wvdb\" (UID: \"d0f0d5b2-2676-4305-8072-10fce8aeb222\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wvdb" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.820081 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn6sq\" (UniqueName: \"kubernetes.io/projected/d0f0d5b2-2676-4305-8072-10fce8aeb222-kube-api-access-rn6sq\") pod \"dnsmasq-dns-86db49b7ff-7wvdb\" (UID: \"d0f0d5b2-2676-4305-8072-10fce8aeb222\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wvdb" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.840983 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-lhcvx" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.922028 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0f0d5b2-2676-4305-8072-10fce8aeb222-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-7wvdb\" (UID: \"d0f0d5b2-2676-4305-8072-10fce8aeb222\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wvdb" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.922340 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0f0d5b2-2676-4305-8072-10fce8aeb222-config\") pod \"dnsmasq-dns-86db49b7ff-7wvdb\" (UID: \"d0f0d5b2-2676-4305-8072-10fce8aeb222\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wvdb" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.922413 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0f0d5b2-2676-4305-8072-10fce8aeb222-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-7wvdb\" (UID: \"d0f0d5b2-2676-4305-8072-10fce8aeb222\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wvdb" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.922440 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn6sq\" (UniqueName: \"kubernetes.io/projected/d0f0d5b2-2676-4305-8072-10fce8aeb222-kube-api-access-rn6sq\") pod \"dnsmasq-dns-86db49b7ff-7wvdb\" (UID: \"d0f0d5b2-2676-4305-8072-10fce8aeb222\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wvdb" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.922583 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0f0d5b2-2676-4305-8072-10fce8aeb222-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-7wvdb\" (UID: \"d0f0d5b2-2676-4305-8072-10fce8aeb222\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wvdb" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.923038 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0f0d5b2-2676-4305-8072-10fce8aeb222-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-7wvdb\" (UID: \"d0f0d5b2-2676-4305-8072-10fce8aeb222\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wvdb" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.923577 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0f0d5b2-2676-4305-8072-10fce8aeb222-config\") pod \"dnsmasq-dns-86db49b7ff-7wvdb\" (UID: \"d0f0d5b2-2676-4305-8072-10fce8aeb222\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wvdb" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.923594 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0f0d5b2-2676-4305-8072-10fce8aeb222-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-7wvdb\" (UID: \"d0f0d5b2-2676-4305-8072-10fce8aeb222\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wvdb" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.923665 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0f0d5b2-2676-4305-8072-10fce8aeb222-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-7wvdb\" (UID: \"d0f0d5b2-2676-4305-8072-10fce8aeb222\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wvdb" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 
11:24:36.949281 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn6sq\" (UniqueName: \"kubernetes.io/projected/d0f0d5b2-2676-4305-8072-10fce8aeb222-kube-api-access-rn6sq\") pod \"dnsmasq-dns-86db49b7ff-7wvdb\" (UID: \"d0f0d5b2-2676-4305-8072-10fce8aeb222\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wvdb" Nov 24 11:24:36 crc kubenswrapper[5072]: I1124 11:24:36.983781 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-l5dss" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.023958 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/583d674d-7ef5-4897-9a08-e278ac090ee5-config\") pod \"583d674d-7ef5-4897-9a08-e278ac090ee5\" (UID: \"583d674d-7ef5-4897-9a08-e278ac090ee5\") " Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.024063 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/583d674d-7ef5-4897-9a08-e278ac090ee5-dns-svc\") pod \"583d674d-7ef5-4897-9a08-e278ac090ee5\" (UID: \"583d674d-7ef5-4897-9a08-e278ac090ee5\") " Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.024278 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/583d674d-7ef5-4897-9a08-e278ac090ee5-config" (OuterVolumeSpecName: "config") pod "583d674d-7ef5-4897-9a08-e278ac090ee5" (UID: "583d674d-7ef5-4897-9a08-e278ac090ee5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.024596 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/583d674d-7ef5-4897-9a08-e278ac090ee5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "583d674d-7ef5-4897-9a08-e278ac090ee5" (UID: "583d674d-7ef5-4897-9a08-e278ac090ee5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.024095 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvpmw\" (UniqueName: \"kubernetes.io/projected/583d674d-7ef5-4897-9a08-e278ac090ee5-kube-api-access-bvpmw\") pod \"583d674d-7ef5-4897-9a08-e278ac090ee5\" (UID: \"583d674d-7ef5-4897-9a08-e278ac090ee5\") " Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.025200 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/583d674d-7ef5-4897-9a08-e278ac090ee5-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.025218 5072 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/583d674d-7ef5-4897-9a08-e278ac090ee5-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.028853 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/583d674d-7ef5-4897-9a08-e278ac090ee5-kube-api-access-bvpmw" (OuterVolumeSpecName: "kube-api-access-bvpmw") pod "583d674d-7ef5-4897-9a08-e278ac090ee5" (UID: "583d674d-7ef5-4897-9a08-e278ac090ee5"). InnerVolumeSpecName "kube-api-access-bvpmw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.086331 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e05f8763-9e64-4bf6-84c8-25df03057309","Type":"ContainerStarted","Data":"b5a93fadd6ffb996ac158d47a2b9fa6fe9201ee1f85ac761b4d5154f8c5628bb"} Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.100416 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7wvdb" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.101845 5072 generic.go:334] "Generic (PLEG): container finished" podID="02573658-0503-4bdb-81a8-21e289b8d886" containerID="d89d11dbbaba34e97777abc177a74e143e226d58a7012a39d24a51576326acd0" exitCode=0 Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.101891 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kphnt" event={"ID":"02573658-0503-4bdb-81a8-21e289b8d886","Type":"ContainerDied","Data":"d89d11dbbaba34e97777abc177a74e143e226d58a7012a39d24a51576326acd0"} Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.111225 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-l5dss" event={"ID":"583d674d-7ef5-4897-9a08-e278ac090ee5","Type":"ContainerDied","Data":"26538297febd3c859100c46d41b0ec11919862b5a9234e1d6dcc49d92cac6c37"} Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.111331 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-l5dss" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.117204 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0f143b81-90ef-461e-a3b5-36ceb68eda94","Type":"ContainerStarted","Data":"2068fcf7c8cd59881a712a2380cc30ee6efdc6ffe08dcf034d2260ddd9f4def9"} Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.117520 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=20.608577049 podStartE2EDuration="27.117506781s" podCreationTimestamp="2025-11-24 11:24:10 +0000 UTC" firstStartedPulling="2025-11-24 11:24:24.112653846 +0000 UTC m=+915.824178322" lastFinishedPulling="2025-11-24 11:24:30.621583568 +0000 UTC m=+922.333108054" observedRunningTime="2025-11-24 11:24:37.115284655 +0000 UTC m=+928.826809121" watchObservedRunningTime="2025-11-24 11:24:37.117506781 +0000 UTC m=+928.829031257" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.126495 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvpmw\" (UniqueName: \"kubernetes.io/projected/583d674d-7ef5-4897-9a08-e278ac090ee5-kube-api-access-bvpmw\") on node \"crc\" DevicePath \"\"" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.155667 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.200515 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-l5dss"] Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.207245 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-l5dss"] Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.220324 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=20.347955822 
podStartE2EDuration="28.220299221s" podCreationTimestamp="2025-11-24 11:24:09 +0000 UTC" firstStartedPulling="2025-11-24 11:24:22.805670961 +0000 UTC m=+914.517195437" lastFinishedPulling="2025-11-24 11:24:30.67801436 +0000 UTC m=+922.389538836" observedRunningTime="2025-11-24 11:24:37.219890461 +0000 UTC m=+928.931414937" watchObservedRunningTime="2025-11-24 11:24:37.220299221 +0000 UTC m=+928.931823697" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.305304 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dwffh"] Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.339416 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-lhcvx"] Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.362521 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.364233 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.366607 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.367264 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-gv5v2" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.367532 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.367606 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.368457 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.383597 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.439236 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67176bb7-8d1f-453f-b403-7e2f323f41f8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"67176bb7-8d1f-453f-b403-7e2f323f41f8\") " pod="openstack/ovn-northd-0" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.439282 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67176bb7-8d1f-453f-b403-7e2f323f41f8-config\") pod \"ovn-northd-0\" (UID: \"67176bb7-8d1f-453f-b403-7e2f323f41f8\") " pod="openstack/ovn-northd-0" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.439316 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtb48\" (UniqueName: \"kubernetes.io/projected/67176bb7-8d1f-453f-b403-7e2f323f41f8-kube-api-access-xtb48\") pod \"ovn-northd-0\" (UID: \"67176bb7-8d1f-453f-b403-7e2f323f41f8\") " pod="openstack/ovn-northd-0" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.439361 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/67176bb7-8d1f-453f-b403-7e2f323f41f8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"67176bb7-8d1f-453f-b403-7e2f323f41f8\") " pod="openstack/ovn-northd-0" Nov 24 11:24:37 
crc kubenswrapper[5072]: I1124 11:24:37.439394 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67176bb7-8d1f-453f-b403-7e2f323f41f8-scripts\") pod \"ovn-northd-0\" (UID: \"67176bb7-8d1f-453f-b403-7e2f323f41f8\") " pod="openstack/ovn-northd-0" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.439423 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/67176bb7-8d1f-453f-b403-7e2f323f41f8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"67176bb7-8d1f-453f-b403-7e2f323f41f8\") " pod="openstack/ovn-northd-0" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.439459 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/67176bb7-8d1f-453f-b403-7e2f323f41f8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"67176bb7-8d1f-453f-b403-7e2f323f41f8\") " pod="openstack/ovn-northd-0" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.462838 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-kphnt" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.540599 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwrr8\" (UniqueName: \"kubernetes.io/projected/02573658-0503-4bdb-81a8-21e289b8d886-kube-api-access-mwrr8\") pod \"02573658-0503-4bdb-81a8-21e289b8d886\" (UID: \"02573658-0503-4bdb-81a8-21e289b8d886\") " Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.540735 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02573658-0503-4bdb-81a8-21e289b8d886-dns-svc\") pod \"02573658-0503-4bdb-81a8-21e289b8d886\" (UID: \"02573658-0503-4bdb-81a8-21e289b8d886\") " Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.540794 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02573658-0503-4bdb-81a8-21e289b8d886-config\") pod \"02573658-0503-4bdb-81a8-21e289b8d886\" (UID: \"02573658-0503-4bdb-81a8-21e289b8d886\") " Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.541065 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/67176bb7-8d1f-453f-b403-7e2f323f41f8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"67176bb7-8d1f-453f-b403-7e2f323f41f8\") " pod="openstack/ovn-northd-0" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.541093 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67176bb7-8d1f-453f-b403-7e2f323f41f8-scripts\") pod \"ovn-northd-0\" (UID: \"67176bb7-8d1f-453f-b403-7e2f323f41f8\") " pod="openstack/ovn-northd-0" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.541129 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/67176bb7-8d1f-453f-b403-7e2f323f41f8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"67176bb7-8d1f-453f-b403-7e2f323f41f8\") " pod="openstack/ovn-northd-0" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.541173 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/67176bb7-8d1f-453f-b403-7e2f323f41f8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"67176bb7-8d1f-453f-b403-7e2f323f41f8\") " pod="openstack/ovn-northd-0" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.541229 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67176bb7-8d1f-453f-b403-7e2f323f41f8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"67176bb7-8d1f-453f-b403-7e2f323f41f8\") " pod="openstack/ovn-northd-0" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.541257 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67176bb7-8d1f-453f-b403-7e2f323f41f8-config\") pod \"ovn-northd-0\" (UID: \"67176bb7-8d1f-453f-b403-7e2f323f41f8\") " pod="openstack/ovn-northd-0" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.541289 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtb48\" (UniqueName: \"kubernetes.io/projected/67176bb7-8d1f-453f-b403-7e2f323f41f8-kube-api-access-xtb48\") pod \"ovn-northd-0\" (UID: \"67176bb7-8d1f-453f-b403-7e2f323f41f8\") " pod="openstack/ovn-northd-0" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.541546 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/67176bb7-8d1f-453f-b403-7e2f323f41f8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"67176bb7-8d1f-453f-b403-7e2f323f41f8\") " pod="openstack/ovn-northd-0" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.544032 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67176bb7-8d1f-453f-b403-7e2f323f41f8-config\") pod \"ovn-northd-0\" (UID: \"67176bb7-8d1f-453f-b403-7e2f323f41f8\") " pod="openstack/ovn-northd-0" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.544072 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67176bb7-8d1f-453f-b403-7e2f323f41f8-scripts\") pod \"ovn-northd-0\" (UID: \"67176bb7-8d1f-453f-b403-7e2f323f41f8\") " pod="openstack/ovn-northd-0" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.545601 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/67176bb7-8d1f-453f-b403-7e2f323f41f8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"67176bb7-8d1f-453f-b403-7e2f323f41f8\") " pod="openstack/ovn-northd-0" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.547000 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02573658-0503-4bdb-81a8-21e289b8d886-kube-api-access-mwrr8" (OuterVolumeSpecName: "kube-api-access-mwrr8") pod "02573658-0503-4bdb-81a8-21e289b8d886" (UID: "02573658-0503-4bdb-81a8-21e289b8d886"). InnerVolumeSpecName "kube-api-access-mwrr8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.549775 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/67176bb7-8d1f-453f-b403-7e2f323f41f8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"67176bb7-8d1f-453f-b403-7e2f323f41f8\") " pod="openstack/ovn-northd-0" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.552176 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67176bb7-8d1f-453f-b403-7e2f323f41f8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"67176bb7-8d1f-453f-b403-7e2f323f41f8\") " pod="openstack/ovn-northd-0" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.564159 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtb48\" (UniqueName: \"kubernetes.io/projected/67176bb7-8d1f-453f-b403-7e2f323f41f8-kube-api-access-xtb48\") pod \"ovn-northd-0\" (UID: \"67176bb7-8d1f-453f-b403-7e2f323f41f8\") " pod="openstack/ovn-northd-0" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.564246 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02573658-0503-4bdb-81a8-21e289b8d886-config" (OuterVolumeSpecName: "config") pod "02573658-0503-4bdb-81a8-21e289b8d886" (UID: "02573658-0503-4bdb-81a8-21e289b8d886"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.570171 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02573658-0503-4bdb-81a8-21e289b8d886-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "02573658-0503-4bdb-81a8-21e289b8d886" (UID: "02573658-0503-4bdb-81a8-21e289b8d886"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.642671 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02573658-0503-4bdb-81a8-21e289b8d886-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.642917 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwrr8\" (UniqueName: \"kubernetes.io/projected/02573658-0503-4bdb-81a8-21e289b8d886-kube-api-access-mwrr8\") on node \"crc\" DevicePath \"\"" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.642928 5072 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02573658-0503-4bdb-81a8-21e289b8d886-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.646615 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7wvdb"] Nov 24 11:24:37 crc kubenswrapper[5072]: W1124 11:24:37.653078 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0f0d5b2_2676_4305_8072_10fce8aeb222.slice/crio-36b8257379145b0813bc1becfb8bb5ddc9ec4bd3f06bba01edf4ac90f3c467c8 WatchSource:0}: Error finding container 36b8257379145b0813bc1becfb8bb5ddc9ec4bd3f06bba01edf4ac90f3c467c8: Status 404 returned error can't find the container with id 36b8257379145b0813bc1becfb8bb5ddc9ec4bd3f06bba01edf4ac90f3c467c8 Nov 24 11:24:37 crc kubenswrapper[5072]: I1124 11:24:37.685454 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 24 11:24:38 crc kubenswrapper[5072]: I1124 11:24:38.132294 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-kphnt" Nov 24 11:24:38 crc kubenswrapper[5072]: I1124 11:24:38.132328 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kphnt" event={"ID":"02573658-0503-4bdb-81a8-21e289b8d886","Type":"ContainerDied","Data":"f1d5408544f7a154216e7acfc483e6a840484b361d798a83735cfa092bc0128d"} Nov 24 11:24:38 crc kubenswrapper[5072]: I1124 11:24:38.132773 5072 scope.go:117] "RemoveContainer" containerID="d89d11dbbaba34e97777abc177a74e143e226d58a7012a39d24a51576326acd0" Nov 24 11:24:38 crc kubenswrapper[5072]: I1124 11:24:38.133700 5072 generic.go:334] "Generic (PLEG): container finished" podID="d0f0d5b2-2676-4305-8072-10fce8aeb222" containerID="a70e1e2dd4d7bb256024f237e7927abfac9c32bc27e0ac8bda31ff2b80a34be9" exitCode=0 Nov 24 11:24:38 crc kubenswrapper[5072]: I1124 11:24:38.133765 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7wvdb" event={"ID":"d0f0d5b2-2676-4305-8072-10fce8aeb222","Type":"ContainerDied","Data":"a70e1e2dd4d7bb256024f237e7927abfac9c32bc27e0ac8bda31ff2b80a34be9"} Nov 24 11:24:38 crc kubenswrapper[5072]: I1124 11:24:38.133791 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7wvdb" event={"ID":"d0f0d5b2-2676-4305-8072-10fce8aeb222","Type":"ContainerStarted","Data":"36b8257379145b0813bc1becfb8bb5ddc9ec4bd3f06bba01edf4ac90f3c467c8"} Nov 24 11:24:38 crc kubenswrapper[5072]: I1124 11:24:38.142483 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dwffh" event={"ID":"6dc3beca-8832-4852-a397-cca5accca1a1","Type":"ContainerStarted","Data":"a4235e21ce84c0f73b98caf189bce4e268b5ebb655b0d382d41d64e238cbd595"} Nov 24 11:24:38 crc kubenswrapper[5072]: I1124 11:24:38.142522 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dwffh" event={"ID":"6dc3beca-8832-4852-a397-cca5accca1a1","Type":"ContainerStarted","Data":"5c3565c48affbe9974cafac1d4e387abb83a94e566a2f1a4512da4d848a5c1f1"} Nov 24 11:24:38 crc kubenswrapper[5072]: I1124 11:24:38.143640 5072 generic.go:334] "Generic (PLEG): container finished" podID="44ffb5b8-a638-4300-a2dc-6a0007c09e1c" containerID="a8e4d5fa49c5399922b9f80d667f43f9e2cb6f455c0f2614e716d19b04e9aa78" exitCode=0 Nov 24 11:24:38 crc kubenswrapper[5072]: I1124 11:24:38.145018 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-lhcvx" event={"ID":"44ffb5b8-a638-4300-a2dc-6a0007c09e1c","Type":"ContainerDied","Data":"a8e4d5fa49c5399922b9f80d667f43f9e2cb6f455c0f2614e716d19b04e9aa78"} Nov 24 11:24:38 crc kubenswrapper[5072]: I1124 11:24:38.145046 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-lhcvx" event={"ID":"44ffb5b8-a638-4300-a2dc-6a0007c09e1c","Type":"ContainerStarted","Data":"cfea06e8b042d08e02efb2eafe2009a56f8ffb3b92d385a4d8f33ea25a772b58"} Nov 24 11:24:38 crc kubenswrapper[5072]: I1124 11:24:38.192919 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 24 11:24:38 crc kubenswrapper[5072]: I1124 11:24:38.202949 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-dwffh" podStartSLOduration=2.202932053 podStartE2EDuration="2.202932053s" podCreationTimestamp="2025-11-24 11:24:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 
11:24:38.189943476 +0000 UTC m=+929.901467972" watchObservedRunningTime="2025-11-24 11:24:38.202932053 +0000 UTC m=+929.914456529" Nov 24 11:24:38 crc kubenswrapper[5072]: I1124 11:24:38.287636 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kphnt"] Nov 24 11:24:38 crc kubenswrapper[5072]: I1124 11:24:38.297168 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kphnt"] Nov 24 11:24:39 crc kubenswrapper[5072]: I1124 11:24:39.039973 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02573658-0503-4bdb-81a8-21e289b8d886" path="/var/lib/kubelet/pods/02573658-0503-4bdb-81a8-21e289b8d886/volumes" Nov 24 11:24:39 crc kubenswrapper[5072]: I1124 11:24:39.045036 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="583d674d-7ef5-4897-9a08-e278ac090ee5" path="/var/lib/kubelet/pods/583d674d-7ef5-4897-9a08-e278ac090ee5/volumes" Nov 24 11:24:39 crc kubenswrapper[5072]: I1124 11:24:39.154554 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"354afe75-70d3-4c45-a990-0299f821b0af","Type":"ContainerStarted","Data":"50ed5bcf7b58686c9c39d2083331f2f908ec020f73f7ca7435cdf2c9fd7abe38"} Nov 24 11:24:39 crc kubenswrapper[5072]: I1124 11:24:39.157113 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7wvdb" event={"ID":"d0f0d5b2-2676-4305-8072-10fce8aeb222","Type":"ContainerStarted","Data":"bcf821958a020716e02a3080425c5daa3cf9d92d26367cae002cd85d03166d35"} Nov 24 11:24:39 crc kubenswrapper[5072]: I1124 11:24:39.157314 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-7wvdb" Nov 24 11:24:39 crc kubenswrapper[5072]: I1124 11:24:39.158706 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"67176bb7-8d1f-453f-b403-7e2f323f41f8","Type":"ContainerStarted","Data":"d68ee09581124e76ab29c22395b4d442a7e2ceba9e971f87329e701cc4baf603"} Nov 24 11:24:39 crc kubenswrapper[5072]: I1124 11:24:39.160683 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-lhcvx" event={"ID":"44ffb5b8-a638-4300-a2dc-6a0007c09e1c","Type":"ContainerStarted","Data":"a4ea975bd314c636ada62f22221c59ca98911c388927a61ac43230ba415512f6"} Nov 24 11:24:39 crc kubenswrapper[5072]: I1124 11:24:39.218711 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-lhcvx" podStartSLOduration=3.218681239 podStartE2EDuration="3.218681239s" podCreationTimestamp="2025-11-24 11:24:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:24:39.204914192 +0000 UTC m=+930.916438688" watchObservedRunningTime="2025-11-24 11:24:39.218681239 +0000 UTC m=+930.930205735" Nov 24 11:24:39 crc kubenswrapper[5072]: I1124 11:24:39.247162 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-7wvdb" podStartSLOduration=3.247141776 podStartE2EDuration="3.247141776s" podCreationTimestamp="2025-11-24 11:24:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:24:39.243909074 +0000 UTC m=+930.955433550" watchObservedRunningTime="2025-11-24 11:24:39.247141776 +0000 UTC m=+930.958666262" Nov 24 11:24:40 crc kubenswrapper[5072]: I1124 
11:24:40.171564 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"67176bb7-8d1f-453f-b403-7e2f323f41f8","Type":"ContainerStarted","Data":"8355454b369a82a7188635b09e45d1029ba19eb20269d411644f101b45022e02"} Nov 24 11:24:40 crc kubenswrapper[5072]: I1124 11:24:40.171993 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"67176bb7-8d1f-453f-b403-7e2f323f41f8","Type":"ContainerStarted","Data":"2b230352e3756b4aa441614da6c66ea986d53b765e72af65bff68ec9f473b67d"} Nov 24 11:24:40 crc kubenswrapper[5072]: I1124 11:24:40.172659 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-lhcvx" Nov 24 11:24:40 crc kubenswrapper[5072]: I1124 11:24:40.195365 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.234051047 podStartE2EDuration="3.19533968s" podCreationTimestamp="2025-11-24 11:24:37 +0000 UTC" firstStartedPulling="2025-11-24 11:24:38.227343458 +0000 UTC m=+929.938867934" lastFinishedPulling="2025-11-24 11:24:39.188632091 +0000 UTC m=+930.900156567" observedRunningTime="2025-11-24 11:24:40.191805631 +0000 UTC m=+931.903330157" watchObservedRunningTime="2025-11-24 11:24:40.19533968 +0000 UTC m=+931.906864156" Nov 24 11:24:40 crc kubenswrapper[5072]: I1124 11:24:40.585426 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 24 11:24:40 crc kubenswrapper[5072]: I1124 11:24:40.585863 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 24 11:24:41 crc kubenswrapper[5072]: I1124 11:24:41.181028 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 24 11:24:42 crc kubenswrapper[5072]: I1124 11:24:42.018216 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:42 crc kubenswrapper[5072]: I1124 11:24:42.018614 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:42 crc kubenswrapper[5072]: I1124 11:24:42.849837 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 24 11:24:42 crc kubenswrapper[5072]: I1124 11:24:42.925743 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 24 11:24:43 crc kubenswrapper[5072]: I1124 11:24:43.645860 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:24:43 crc kubenswrapper[5072]: I1124 11:24:43.645906 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:24:43 crc kubenswrapper[5072]: I1124 11:24:43.645944 5072 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 11:24:43 crc kubenswrapper[5072]: I1124 11:24:43.646540 5072 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8e2fafce48ed7d24bea410cc4a09f0aa29c5014f23ce7269a5e5cc3ebe7aa12f"} pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 11:24:43 crc kubenswrapper[5072]: I1124 11:24:43.646596 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" containerID="cri-o://8e2fafce48ed7d24bea410cc4a09f0aa29c5014f23ce7269a5e5cc3ebe7aa12f" gracePeriod=600 Nov 24 11:24:43 crc kubenswrapper[5072]: I1124 11:24:43.957396 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 24 11:24:44 crc kubenswrapper[5072]: I1124 11:24:44.208696 5072 generic.go:334] "Generic (PLEG): container finished" podID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerID="8e2fafce48ed7d24bea410cc4a09f0aa29c5014f23ce7269a5e5cc3ebe7aa12f" exitCode=0 Nov 24 11:24:44 crc kubenswrapper[5072]: I1124 11:24:44.208871 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerDied","Data":"8e2fafce48ed7d24bea410cc4a09f0aa29c5014f23ce7269a5e5cc3ebe7aa12f"} Nov 24 11:24:44 crc kubenswrapper[5072]: I1124 11:24:44.209105 5072 scope.go:117] "RemoveContainer" containerID="9acae0aae65eaa2777547c62fd161d329c111af7aec02efa5b970dc26ddc2ae7" Nov 24 11:24:44 crc kubenswrapper[5072]: I1124 11:24:44.218896 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:44 crc kubenswrapper[5072]: I1124 11:24:44.304848 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 24 11:24:46 crc kubenswrapper[5072]: I1124 11:24:46.842604 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-lhcvx" Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.102571 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-7wvdb" Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.176759 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-lhcvx"] Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.237190 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerStarted","Data":"b030b14c475fa1e60935020fac8bbc582c34d80ebfa6d2f82381ce67034a5e50"} Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.237333 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-lhcvx" podUID="44ffb5b8-a638-4300-a2dc-6a0007c09e1c" containerName="dnsmasq-dns" containerID="cri-o://a4ea975bd314c636ada62f22221c59ca98911c388927a61ac43230ba415512f6" gracePeriod=10 Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.546481 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-3c02-account-create-q2mpb"] Nov 24 11:24:47 crc kubenswrapper[5072]: E1124 11:24:47.547429 5072 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="02573658-0503-4bdb-81a8-21e289b8d886" containerName="init" Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.547443 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="02573658-0503-4bdb-81a8-21e289b8d886" containerName="init" Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.547792 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="02573658-0503-4bdb-81a8-21e289b8d886" containerName="init" Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.548574 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3c02-account-create-q2mpb" Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.550775 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.553163 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3c02-account-create-q2mpb"] Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.604607 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-zkhhj"] Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.605580 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-zkhhj" Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.612575 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-zkhhj"] Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.628890 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsd9z\" (UniqueName: \"kubernetes.io/projected/3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c-kube-api-access-qsd9z\") pod \"glance-3c02-account-create-q2mpb\" (UID: \"3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c\") " pod="openstack/glance-3c02-account-create-q2mpb" Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.628996 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c-operator-scripts\") pod \"glance-3c02-account-create-q2mpb\" (UID: \"3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c\") " pod="openstack/glance-3c02-account-create-q2mpb" Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.680232 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-lhcvx" Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.729823 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44ffb5b8-a638-4300-a2dc-6a0007c09e1c-config\") pod \"44ffb5b8-a638-4300-a2dc-6a0007c09e1c\" (UID: \"44ffb5b8-a638-4300-a2dc-6a0007c09e1c\") " Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.729906 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8q2z\" (UniqueName: \"kubernetes.io/projected/44ffb5b8-a638-4300-a2dc-6a0007c09e1c-kube-api-access-j8q2z\") pod \"44ffb5b8-a638-4300-a2dc-6a0007c09e1c\" (UID: \"44ffb5b8-a638-4300-a2dc-6a0007c09e1c\") " Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.729929 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44ffb5b8-a638-4300-a2dc-6a0007c09e1c-dns-svc\") pod \"44ffb5b8-a638-4300-a2dc-6a0007c09e1c\" (UID: \"44ffb5b8-a638-4300-a2dc-6a0007c09e1c\") " Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.730007 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44ffb5b8-a638-4300-a2dc-6a0007c09e1c-ovsdbserver-nb\") pod \"44ffb5b8-a638-4300-a2dc-6a0007c09e1c\" (UID: \"44ffb5b8-a638-4300-a2dc-6a0007c09e1c\") " Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.730231 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d740db71-09cb-4511-9491-34292bf95e8f-operator-scripts\") pod \"glance-db-create-zkhhj\" (UID: \"d740db71-09cb-4511-9491-34292bf95e8f\") " pod="openstack/glance-db-create-zkhhj" Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.730304 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsd9z\" (UniqueName: \"kubernetes.io/projected/3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c-kube-api-access-qsd9z\") pod \"glance-3c02-account-create-q2mpb\" (UID: \"3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c\") " pod="openstack/glance-3c02-account-create-q2mpb" Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.730404 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c-operator-scripts\") pod \"glance-3c02-account-create-q2mpb\" (UID: \"3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c\") " pod="openstack/glance-3c02-account-create-q2mpb" Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.730473 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njxnr\" (UniqueName: \"kubernetes.io/projected/d740db71-09cb-4511-9491-34292bf95e8f-kube-api-access-njxnr\") pod \"glance-db-create-zkhhj\" (UID: \"d740db71-09cb-4511-9491-34292bf95e8f\") " pod="openstack/glance-db-create-zkhhj" Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.731277 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c-operator-scripts\") pod \"glance-3c02-account-create-q2mpb\" (UID: \"3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c\") " pod="openstack/glance-3c02-account-create-q2mpb" Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.744728 5072 
Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.750755 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsd9z\" (UniqueName: \"kubernetes.io/projected/3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c-kube-api-access-qsd9z\") pod \"glance-3c02-account-create-q2mpb\" (UID: \"3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c\") " pod="openstack/glance-3c02-account-create-q2mpb"
Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.792067 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44ffb5b8-a638-4300-a2dc-6a0007c09e1c-config" (OuterVolumeSpecName: "config") pod "44ffb5b8-a638-4300-a2dc-6a0007c09e1c" (UID: "44ffb5b8-a638-4300-a2dc-6a0007c09e1c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.799799 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44ffb5b8-a638-4300-a2dc-6a0007c09e1c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "44ffb5b8-a638-4300-a2dc-6a0007c09e1c" (UID: "44ffb5b8-a638-4300-a2dc-6a0007c09e1c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.806830 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44ffb5b8-a638-4300-a2dc-6a0007c09e1c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "44ffb5b8-a638-4300-a2dc-6a0007c09e1c" (UID: "44ffb5b8-a638-4300-a2dc-6a0007c09e1c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.831763 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njxnr\" (UniqueName: \"kubernetes.io/projected/d740db71-09cb-4511-9491-34292bf95e8f-kube-api-access-njxnr\") pod \"glance-db-create-zkhhj\" (UID: \"d740db71-09cb-4511-9491-34292bf95e8f\") " pod="openstack/glance-db-create-zkhhj"
Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.831830 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d740db71-09cb-4511-9491-34292bf95e8f-operator-scripts\") pod \"glance-db-create-zkhhj\" (UID: \"d740db71-09cb-4511-9491-34292bf95e8f\") " pod="openstack/glance-db-create-zkhhj"
Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.831910 5072 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44ffb5b8-a638-4300-a2dc-6a0007c09e1c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.831921 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44ffb5b8-a638-4300-a2dc-6a0007c09e1c-config\") on node \"crc\" DevicePath \"\""
Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.831932 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8q2z\" (UniqueName: \"kubernetes.io/projected/44ffb5b8-a638-4300-a2dc-6a0007c09e1c-kube-api-access-j8q2z\") on node \"crc\" DevicePath \"\""
Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.831941 5072 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44ffb5b8-a638-4300-a2dc-6a0007c09e1c-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.832463 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d740db71-09cb-4511-9491-34292bf95e8f-operator-scripts\") pod \"glance-db-create-zkhhj\" (UID: \"d740db71-09cb-4511-9491-34292bf95e8f\") " pod="openstack/glance-db-create-zkhhj"
Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.850230 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njxnr\" (UniqueName: \"kubernetes.io/projected/d740db71-09cb-4511-9491-34292bf95e8f-kube-api-access-njxnr\") pod \"glance-db-create-zkhhj\" (UID: \"d740db71-09cb-4511-9491-34292bf95e8f\") " pod="openstack/glance-db-create-zkhhj"
Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.870198 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3c02-account-create-q2mpb"
Nov 24 11:24:47 crc kubenswrapper[5072]: I1124 11:24:47.976443 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-zkhhj"
Need to start a new one" pod="openstack/glance-db-create-zkhhj" Nov 24 11:24:48 crc kubenswrapper[5072]: I1124 11:24:48.248936 5072 generic.go:334] "Generic (PLEG): container finished" podID="44ffb5b8-a638-4300-a2dc-6a0007c09e1c" containerID="a4ea975bd314c636ada62f22221c59ca98911c388927a61ac43230ba415512f6" exitCode=0 Nov 24 11:24:48 crc kubenswrapper[5072]: I1124 11:24:48.248993 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-lhcvx" event={"ID":"44ffb5b8-a638-4300-a2dc-6a0007c09e1c","Type":"ContainerDied","Data":"a4ea975bd314c636ada62f22221c59ca98911c388927a61ac43230ba415512f6"} Nov 24 11:24:48 crc kubenswrapper[5072]: I1124 11:24:48.249305 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-lhcvx" event={"ID":"44ffb5b8-a638-4300-a2dc-6a0007c09e1c","Type":"ContainerDied","Data":"cfea06e8b042d08e02efb2eafe2009a56f8ffb3b92d385a4d8f33ea25a772b58"} Nov 24 11:24:48 crc kubenswrapper[5072]: I1124 11:24:48.249337 5072 scope.go:117] "RemoveContainer" containerID="a4ea975bd314c636ada62f22221c59ca98911c388927a61ac43230ba415512f6" Nov 24 11:24:48 crc kubenswrapper[5072]: I1124 11:24:48.249020 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-lhcvx" Nov 24 11:24:48 crc kubenswrapper[5072]: I1124 11:24:48.260363 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-zkhhj"] Nov 24 11:24:48 crc kubenswrapper[5072]: W1124 11:24:48.271906 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd740db71_09cb_4511_9491_34292bf95e8f.slice/crio-122187694e4c2ed63234e498f9626f6bb75d827fd98a89af7068361b7c7d09fb WatchSource:0}: Error finding container 122187694e4c2ed63234e498f9626f6bb75d827fd98a89af7068361b7c7d09fb: Status 404 returned error can't find the container with id 122187694e4c2ed63234e498f9626f6bb75d827fd98a89af7068361b7c7d09fb Nov 24 11:24:48 crc kubenswrapper[5072]: I1124 11:24:48.280607 5072 scope.go:117] "RemoveContainer" containerID="a8e4d5fa49c5399922b9f80d667f43f9e2cb6f455c0f2614e716d19b04e9aa78" Nov 24 11:24:48 crc kubenswrapper[5072]: I1124 11:24:48.289364 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-lhcvx"] Nov 24 11:24:48 crc kubenswrapper[5072]: I1124 11:24:48.294419 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-lhcvx"] Nov 24 11:24:48 crc kubenswrapper[5072]: I1124 11:24:48.322793 5072 scope.go:117] "RemoveContainer" containerID="a4ea975bd314c636ada62f22221c59ca98911c388927a61ac43230ba415512f6" Nov 24 11:24:48 crc kubenswrapper[5072]: E1124 11:24:48.323460 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4ea975bd314c636ada62f22221c59ca98911c388927a61ac43230ba415512f6\": container with ID starting with a4ea975bd314c636ada62f22221c59ca98911c388927a61ac43230ba415512f6 not found: ID does not exist" containerID="a4ea975bd314c636ada62f22221c59ca98911c388927a61ac43230ba415512f6" Nov 24 11:24:48 crc kubenswrapper[5072]: I1124 11:24:48.323486 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4ea975bd314c636ada62f22221c59ca98911c388927a61ac43230ba415512f6"} err="failed to get container status \"a4ea975bd314c636ada62f22221c59ca98911c388927a61ac43230ba415512f6\": rpc error: code = NotFound desc = could not find 
container \"a4ea975bd314c636ada62f22221c59ca98911c388927a61ac43230ba415512f6\": container with ID starting with a4ea975bd314c636ada62f22221c59ca98911c388927a61ac43230ba415512f6 not found: ID does not exist" Nov 24 11:24:48 crc kubenswrapper[5072]: I1124 11:24:48.323511 5072 scope.go:117] "RemoveContainer" containerID="a8e4d5fa49c5399922b9f80d667f43f9e2cb6f455c0f2614e716d19b04e9aa78" Nov 24 11:24:48 crc kubenswrapper[5072]: E1124 11:24:48.323857 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8e4d5fa49c5399922b9f80d667f43f9e2cb6f455c0f2614e716d19b04e9aa78\": container with ID starting with a8e4d5fa49c5399922b9f80d667f43f9e2cb6f455c0f2614e716d19b04e9aa78 not found: ID does not exist" containerID="a8e4d5fa49c5399922b9f80d667f43f9e2cb6f455c0f2614e716d19b04e9aa78" Nov 24 11:24:48 crc kubenswrapper[5072]: I1124 11:24:48.323912 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8e4d5fa49c5399922b9f80d667f43f9e2cb6f455c0f2614e716d19b04e9aa78"} err="failed to get container status \"a8e4d5fa49c5399922b9f80d667f43f9e2cb6f455c0f2614e716d19b04e9aa78\": rpc error: code = NotFound desc = could not find container \"a8e4d5fa49c5399922b9f80d667f43f9e2cb6f455c0f2614e716d19b04e9aa78\": container with ID starting with a8e4d5fa49c5399922b9f80d667f43f9e2cb6f455c0f2614e716d19b04e9aa78 not found: ID does not exist" Nov 24 11:24:48 crc kubenswrapper[5072]: I1124 11:24:48.337562 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3c02-account-create-q2mpb"] Nov 24 11:24:48 crc kubenswrapper[5072]: W1124 11:24:48.339944 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c72a93a_949b_4cdd_ba4f_fbd9371a4b1c.slice/crio-792573eff2d057887013d6cc4f0faa0b10d55b5f5ac0327054a45bb5c6406789 WatchSource:0}: Error finding container 792573eff2d057887013d6cc4f0faa0b10d55b5f5ac0327054a45bb5c6406789: Status 404 returned error can't find the container with id 792573eff2d057887013d6cc4f0faa0b10d55b5f5ac0327054a45bb5c6406789 Nov 24 11:24:49 crc kubenswrapper[5072]: I1124 11:24:49.025828 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44ffb5b8-a638-4300-a2dc-6a0007c09e1c" path="/var/lib/kubelet/pods/44ffb5b8-a638-4300-a2dc-6a0007c09e1c/volumes" Nov 24 11:24:49 crc kubenswrapper[5072]: I1124 11:24:49.257610 5072 generic.go:334] "Generic (PLEG): container finished" podID="d740db71-09cb-4511-9491-34292bf95e8f" containerID="38a91d6105e41bc4396681ef576d9b1524064107275d3860cf0d95485d50d468" exitCode=0 Nov 24 11:24:49 crc kubenswrapper[5072]: I1124 11:24:49.257694 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-zkhhj" event={"ID":"d740db71-09cb-4511-9491-34292bf95e8f","Type":"ContainerDied","Data":"38a91d6105e41bc4396681ef576d9b1524064107275d3860cf0d95485d50d468"} Nov 24 11:24:49 crc kubenswrapper[5072]: I1124 11:24:49.257742 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-zkhhj" event={"ID":"d740db71-09cb-4511-9491-34292bf95e8f","Type":"ContainerStarted","Data":"122187694e4c2ed63234e498f9626f6bb75d827fd98a89af7068361b7c7d09fb"} Nov 24 11:24:49 crc kubenswrapper[5072]: I1124 11:24:49.259704 5072 generic.go:334] "Generic (PLEG): container finished" podID="3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c" containerID="30bd6a20ad532d4ca9c20ae128f77136b7b249a19b3b00ae583f9d48f4c04316" exitCode=0 Nov 24 11:24:49 crc 
Nov 24 11:24:49 crc kubenswrapper[5072]: I1124 11:24:49.259836 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3c02-account-create-q2mpb" event={"ID":"3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c","Type":"ContainerStarted","Data":"792573eff2d057887013d6cc4f0faa0b10d55b5f5ac0327054a45bb5c6406789"}
Nov 24 11:24:50 crc kubenswrapper[5072]: I1124 11:24:50.675469 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-zkhhj"
Nov 24 11:24:50 crc kubenswrapper[5072]: I1124 11:24:50.680658 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3c02-account-create-q2mpb"
Nov 24 11:24:50 crc kubenswrapper[5072]: I1124 11:24:50.781620 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njxnr\" (UniqueName: \"kubernetes.io/projected/d740db71-09cb-4511-9491-34292bf95e8f-kube-api-access-njxnr\") pod \"d740db71-09cb-4511-9491-34292bf95e8f\" (UID: \"d740db71-09cb-4511-9491-34292bf95e8f\") "
Nov 24 11:24:50 crc kubenswrapper[5072]: I1124 11:24:50.781799 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c-operator-scripts\") pod \"3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c\" (UID: \"3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c\") "
Nov 24 11:24:50 crc kubenswrapper[5072]: I1124 11:24:50.781879 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsd9z\" (UniqueName: \"kubernetes.io/projected/3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c-kube-api-access-qsd9z\") pod \"3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c\" (UID: \"3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c\") "
Nov 24 11:24:50 crc kubenswrapper[5072]: I1124 11:24:50.781910 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d740db71-09cb-4511-9491-34292bf95e8f-operator-scripts\") pod \"d740db71-09cb-4511-9491-34292bf95e8f\" (UID: \"d740db71-09cb-4511-9491-34292bf95e8f\") "
Nov 24 11:24:50 crc kubenswrapper[5072]: I1124 11:24:50.782428 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d740db71-09cb-4511-9491-34292bf95e8f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d740db71-09cb-4511-9491-34292bf95e8f" (UID: "d740db71-09cb-4511-9491-34292bf95e8f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:24:50 crc kubenswrapper[5072]: I1124 11:24:50.782751 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c" (UID: "3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:24:50 crc kubenswrapper[5072]: I1124 11:24:50.788428 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d740db71-09cb-4511-9491-34292bf95e8f-kube-api-access-njxnr" (OuterVolumeSpecName: "kube-api-access-njxnr") pod "d740db71-09cb-4511-9491-34292bf95e8f" (UID: "d740db71-09cb-4511-9491-34292bf95e8f"). InnerVolumeSpecName "kube-api-access-njxnr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:24:50 crc kubenswrapper[5072]: I1124 11:24:50.792673 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c-kube-api-access-qsd9z" (OuterVolumeSpecName: "kube-api-access-qsd9z") pod "3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c" (UID: "3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c"). InnerVolumeSpecName "kube-api-access-qsd9z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:24:50 crc kubenswrapper[5072]: I1124 11:24:50.883666 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsd9z\" (UniqueName: \"kubernetes.io/projected/3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c-kube-api-access-qsd9z\") on node \"crc\" DevicePath \"\""
Nov 24 11:24:50 crc kubenswrapper[5072]: I1124 11:24:50.883704 5072 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d740db71-09cb-4511-9491-34292bf95e8f-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 11:24:50 crc kubenswrapper[5072]: I1124 11:24:50.883716 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njxnr\" (UniqueName: \"kubernetes.io/projected/d740db71-09cb-4511-9491-34292bf95e8f-kube-api-access-njxnr\") on node \"crc\" DevicePath \"\""
Nov 24 11:24:50 crc kubenswrapper[5072]: I1124 11:24:50.883729 5072 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 11:24:51 crc kubenswrapper[5072]: I1124 11:24:51.282256 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-zkhhj" event={"ID":"d740db71-09cb-4511-9491-34292bf95e8f","Type":"ContainerDied","Data":"122187694e4c2ed63234e498f9626f6bb75d827fd98a89af7068361b7c7d09fb"}
Nov 24 11:24:51 crc kubenswrapper[5072]: I1124 11:24:51.282304 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="122187694e4c2ed63234e498f9626f6bb75d827fd98a89af7068361b7c7d09fb"
Nov 24 11:24:51 crc kubenswrapper[5072]: I1124 11:24:51.282362 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-zkhhj"
Nov 24 11:24:51 crc kubenswrapper[5072]: I1124 11:24:51.284888 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3c02-account-create-q2mpb" event={"ID":"3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c","Type":"ContainerDied","Data":"792573eff2d057887013d6cc4f0faa0b10d55b5f5ac0327054a45bb5c6406789"}
Nov 24 11:24:51 crc kubenswrapper[5072]: I1124 11:24:51.284917 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="792573eff2d057887013d6cc4f0faa0b10d55b5f5ac0327054a45bb5c6406789"
Nov 24 11:24:51 crc kubenswrapper[5072]: I1124 11:24:51.285020 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3c02-account-create-q2mpb"
Need to start a new one" pod="openstack/glance-3c02-account-create-q2mpb" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.003548 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-jq922"] Nov 24 11:24:52 crc kubenswrapper[5072]: E1124 11:24:52.004245 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44ffb5b8-a638-4300-a2dc-6a0007c09e1c" containerName="dnsmasq-dns" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.004275 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="44ffb5b8-a638-4300-a2dc-6a0007c09e1c" containerName="dnsmasq-dns" Nov 24 11:24:52 crc kubenswrapper[5072]: E1124 11:24:52.004339 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c" containerName="mariadb-account-create" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.004355 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c" containerName="mariadb-account-create" Nov 24 11:24:52 crc kubenswrapper[5072]: E1124 11:24:52.007760 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44ffb5b8-a638-4300-a2dc-6a0007c09e1c" containerName="init" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.007819 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="44ffb5b8-a638-4300-a2dc-6a0007c09e1c" containerName="init" Nov 24 11:24:52 crc kubenswrapper[5072]: E1124 11:24:52.007861 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d740db71-09cb-4511-9491-34292bf95e8f" containerName="mariadb-database-create" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.007886 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="d740db71-09cb-4511-9491-34292bf95e8f" containerName="mariadb-database-create" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.008416 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="44ffb5b8-a638-4300-a2dc-6a0007c09e1c" containerName="dnsmasq-dns" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.008467 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="d740db71-09cb-4511-9491-34292bf95e8f" containerName="mariadb-database-create" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.008491 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c" containerName="mariadb-account-create" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.009645 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jq922" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.012331 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jq922"] Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.102050 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5244-account-create-wtbzl"] Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.103467 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5244-account-create-wtbzl" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.105111 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/195a7abe-4729-4b77-8198-3eca911c2d84-operator-scripts\") pod \"keystone-db-create-jq922\" (UID: \"195a7abe-4729-4b77-8198-3eca911c2d84\") " pod="openstack/keystone-db-create-jq922" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.105247 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq4cz\" (UniqueName: \"kubernetes.io/projected/195a7abe-4729-4b77-8198-3eca911c2d84-kube-api-access-vq4cz\") pod \"keystone-db-create-jq922\" (UID: \"195a7abe-4729-4b77-8198-3eca911c2d84\") " pod="openstack/keystone-db-create-jq922" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.108836 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.122119 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5244-account-create-wtbzl"] Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.207778 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/195a7abe-4729-4b77-8198-3eca911c2d84-operator-scripts\") pod \"keystone-db-create-jq922\" (UID: \"195a7abe-4729-4b77-8198-3eca911c2d84\") " pod="openstack/keystone-db-create-jq922" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.207943 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dbjs\" (UniqueName: \"kubernetes.io/projected/e39b3a7c-db7f-4d96-bbb1-1293b0432659-kube-api-access-8dbjs\") pod \"keystone-5244-account-create-wtbzl\" (UID: \"e39b3a7c-db7f-4d96-bbb1-1293b0432659\") " pod="openstack/keystone-5244-account-create-wtbzl" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.208001 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e39b3a7c-db7f-4d96-bbb1-1293b0432659-operator-scripts\") pod \"keystone-5244-account-create-wtbzl\" (UID: \"e39b3a7c-db7f-4d96-bbb1-1293b0432659\") " pod="openstack/keystone-5244-account-create-wtbzl" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.208042 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq4cz\" (UniqueName: \"kubernetes.io/projected/195a7abe-4729-4b77-8198-3eca911c2d84-kube-api-access-vq4cz\") pod \"keystone-db-create-jq922\" (UID: \"195a7abe-4729-4b77-8198-3eca911c2d84\") " pod="openstack/keystone-db-create-jq922" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.209594 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/195a7abe-4729-4b77-8198-3eca911c2d84-operator-scripts\") pod \"keystone-db-create-jq922\" (UID: \"195a7abe-4729-4b77-8198-3eca911c2d84\") " pod="openstack/keystone-db-create-jq922" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.249787 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq4cz\" (UniqueName: \"kubernetes.io/projected/195a7abe-4729-4b77-8198-3eca911c2d84-kube-api-access-vq4cz\") pod \"keystone-db-create-jq922\" (UID: 
\"195a7abe-4729-4b77-8198-3eca911c2d84\") " pod="openstack/keystone-db-create-jq922" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.302344 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-c6np9"] Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.303757 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-c6np9" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.309208 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dbjs\" (UniqueName: \"kubernetes.io/projected/e39b3a7c-db7f-4d96-bbb1-1293b0432659-kube-api-access-8dbjs\") pod \"keystone-5244-account-create-wtbzl\" (UID: \"e39b3a7c-db7f-4d96-bbb1-1293b0432659\") " pod="openstack/keystone-5244-account-create-wtbzl" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.309268 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e39b3a7c-db7f-4d96-bbb1-1293b0432659-operator-scripts\") pod \"keystone-5244-account-create-wtbzl\" (UID: \"e39b3a7c-db7f-4d96-bbb1-1293b0432659\") " pod="openstack/keystone-5244-account-create-wtbzl" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.310066 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e39b3a7c-db7f-4d96-bbb1-1293b0432659-operator-scripts\") pod \"keystone-5244-account-create-wtbzl\" (UID: \"e39b3a7c-db7f-4d96-bbb1-1293b0432659\") " pod="openstack/keystone-5244-account-create-wtbzl" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.310229 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-c6np9"] Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.326256 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dbjs\" (UniqueName: \"kubernetes.io/projected/e39b3a7c-db7f-4d96-bbb1-1293b0432659-kube-api-access-8dbjs\") pod \"keystone-5244-account-create-wtbzl\" (UID: \"e39b3a7c-db7f-4d96-bbb1-1293b0432659\") " pod="openstack/keystone-5244-account-create-wtbzl" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.345872 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jq922" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.411274 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/295f55cf-b9ac-454a-a715-b48c901a8f34-operator-scripts\") pod \"placement-db-create-c6np9\" (UID: \"295f55cf-b9ac-454a-a715-b48c901a8f34\") " pod="openstack/placement-db-create-c6np9" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.411338 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6p7k\" (UniqueName: \"kubernetes.io/projected/295f55cf-b9ac-454a-a715-b48c901a8f34-kube-api-access-h6p7k\") pod \"placement-db-create-c6np9\" (UID: \"295f55cf-b9ac-454a-a715-b48c901a8f34\") " pod="openstack/placement-db-create-c6np9" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.422212 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5244-account-create-wtbzl" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.431133 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7d4a-account-create-vqdtq"] Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.432179 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7d4a-account-create-vqdtq" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.440537 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.443129 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7d4a-account-create-vqdtq"] Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.513445 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6p7k\" (UniqueName: \"kubernetes.io/projected/295f55cf-b9ac-454a-a715-b48c901a8f34-kube-api-access-h6p7k\") pod \"placement-db-create-c6np9\" (UID: \"295f55cf-b9ac-454a-a715-b48c901a8f34\") " pod="openstack/placement-db-create-c6np9" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.513783 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdmlk\" (UniqueName: \"kubernetes.io/projected/0d72b502-87c9-475a-93b4-739816ea7f7e-kube-api-access-cdmlk\") pod \"placement-7d4a-account-create-vqdtq\" (UID: \"0d72b502-87c9-475a-93b4-739816ea7f7e\") " pod="openstack/placement-7d4a-account-create-vqdtq" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.513901 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d72b502-87c9-475a-93b4-739816ea7f7e-operator-scripts\") pod \"placement-7d4a-account-create-vqdtq\" (UID: \"0d72b502-87c9-475a-93b4-739816ea7f7e\") " pod="openstack/placement-7d4a-account-create-vqdtq" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.513985 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/295f55cf-b9ac-454a-a715-b48c901a8f34-operator-scripts\") pod \"placement-db-create-c6np9\" (UID: \"295f55cf-b9ac-454a-a715-b48c901a8f34\") " pod="openstack/placement-db-create-c6np9" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.515835 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/295f55cf-b9ac-454a-a715-b48c901a8f34-operator-scripts\") pod \"placement-db-create-c6np9\" (UID: \"295f55cf-b9ac-454a-a715-b48c901a8f34\") " pod="openstack/placement-db-create-c6np9" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.535023 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6p7k\" (UniqueName: \"kubernetes.io/projected/295f55cf-b9ac-454a-a715-b48c901a8f34-kube-api-access-h6p7k\") pod \"placement-db-create-c6np9\" (UID: \"295f55cf-b9ac-454a-a715-b48c901a8f34\") " pod="openstack/placement-db-create-c6np9" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.617087 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdmlk\" (UniqueName: \"kubernetes.io/projected/0d72b502-87c9-475a-93b4-739816ea7f7e-kube-api-access-cdmlk\") pod \"placement-7d4a-account-create-vqdtq\" (UID: \"0d72b502-87c9-475a-93b4-739816ea7f7e\") " 
pod="openstack/placement-7d4a-account-create-vqdtq" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.617172 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d72b502-87c9-475a-93b4-739816ea7f7e-operator-scripts\") pod \"placement-7d4a-account-create-vqdtq\" (UID: \"0d72b502-87c9-475a-93b4-739816ea7f7e\") " pod="openstack/placement-7d4a-account-create-vqdtq" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.618091 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d72b502-87c9-475a-93b4-739816ea7f7e-operator-scripts\") pod \"placement-7d4a-account-create-vqdtq\" (UID: \"0d72b502-87c9-475a-93b4-739816ea7f7e\") " pod="openstack/placement-7d4a-account-create-vqdtq" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.621580 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-c6np9" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.633633 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdmlk\" (UniqueName: \"kubernetes.io/projected/0d72b502-87c9-475a-93b4-739816ea7f7e-kube-api-access-cdmlk\") pod \"placement-7d4a-account-create-vqdtq\" (UID: \"0d72b502-87c9-475a-93b4-739816ea7f7e\") " pod="openstack/placement-7d4a-account-create-vqdtq" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.760917 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-hdh5p"] Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.762159 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-hdh5p" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.766222 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.768581 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-bb4tx" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.773729 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-hdh5p"] Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.782167 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7d4a-account-create-vqdtq" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.842346 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jq922"] Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.845700 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76bdb5be-3864-4599-9ac5-7475f63290a3-combined-ca-bundle\") pod \"glance-db-sync-hdh5p\" (UID: \"76bdb5be-3864-4599-9ac5-7475f63290a3\") " pod="openstack/glance-db-sync-hdh5p" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.845738 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76bdb5be-3864-4599-9ac5-7475f63290a3-config-data\") pod \"glance-db-sync-hdh5p\" (UID: \"76bdb5be-3864-4599-9ac5-7475f63290a3\") " pod="openstack/glance-db-sync-hdh5p" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.845760 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/76bdb5be-3864-4599-9ac5-7475f63290a3-db-sync-config-data\") pod \"glance-db-sync-hdh5p\" (UID: \"76bdb5be-3864-4599-9ac5-7475f63290a3\") " pod="openstack/glance-db-sync-hdh5p" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.845800 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssqz7\" (UniqueName: \"kubernetes.io/projected/76bdb5be-3864-4599-9ac5-7475f63290a3-kube-api-access-ssqz7\") pod \"glance-db-sync-hdh5p\" (UID: \"76bdb5be-3864-4599-9ac5-7475f63290a3\") " pod="openstack/glance-db-sync-hdh5p" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.854668 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.875298 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5244-account-create-wtbzl"] Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.947017 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssqz7\" (UniqueName: \"kubernetes.io/projected/76bdb5be-3864-4599-9ac5-7475f63290a3-kube-api-access-ssqz7\") pod \"glance-db-sync-hdh5p\" (UID: \"76bdb5be-3864-4599-9ac5-7475f63290a3\") " pod="openstack/glance-db-sync-hdh5p" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.947609 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76bdb5be-3864-4599-9ac5-7475f63290a3-combined-ca-bundle\") pod \"glance-db-sync-hdh5p\" (UID: \"76bdb5be-3864-4599-9ac5-7475f63290a3\") " pod="openstack/glance-db-sync-hdh5p" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.947728 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76bdb5be-3864-4599-9ac5-7475f63290a3-config-data\") pod \"glance-db-sync-hdh5p\" (UID: \"76bdb5be-3864-4599-9ac5-7475f63290a3\") " pod="openstack/glance-db-sync-hdh5p" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.947800 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/76bdb5be-3864-4599-9ac5-7475f63290a3-db-sync-config-data\") pod 
\"glance-db-sync-hdh5p\" (UID: \"76bdb5be-3864-4599-9ac5-7475f63290a3\") " pod="openstack/glance-db-sync-hdh5p" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.952053 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76bdb5be-3864-4599-9ac5-7475f63290a3-config-data\") pod \"glance-db-sync-hdh5p\" (UID: \"76bdb5be-3864-4599-9ac5-7475f63290a3\") " pod="openstack/glance-db-sync-hdh5p" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.952209 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76bdb5be-3864-4599-9ac5-7475f63290a3-combined-ca-bundle\") pod \"glance-db-sync-hdh5p\" (UID: \"76bdb5be-3864-4599-9ac5-7475f63290a3\") " pod="openstack/glance-db-sync-hdh5p" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.957589 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/76bdb5be-3864-4599-9ac5-7475f63290a3-db-sync-config-data\") pod \"glance-db-sync-hdh5p\" (UID: \"76bdb5be-3864-4599-9ac5-7475f63290a3\") " pod="openstack/glance-db-sync-hdh5p" Nov 24 11:24:52 crc kubenswrapper[5072]: I1124 11:24:52.963323 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssqz7\" (UniqueName: \"kubernetes.io/projected/76bdb5be-3864-4599-9ac5-7475f63290a3-kube-api-access-ssqz7\") pod \"glance-db-sync-hdh5p\" (UID: \"76bdb5be-3864-4599-9ac5-7475f63290a3\") " pod="openstack/glance-db-sync-hdh5p" Nov 24 11:24:53 crc kubenswrapper[5072]: I1124 11:24:53.088688 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-hdh5p" Nov 24 11:24:53 crc kubenswrapper[5072]: I1124 11:24:53.129720 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-c6np9"] Nov 24 11:24:53 crc kubenswrapper[5072]: W1124 11:24:53.132966 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod295f55cf_b9ac_454a_a715_b48c901a8f34.slice/crio-e87c5b357597eebcb4ea524f37b562f84236961c2bd6e26800993be7000696ca WatchSource:0}: Error finding container e87c5b357597eebcb4ea524f37b562f84236961c2bd6e26800993be7000696ca: Status 404 returned error can't find the container with id e87c5b357597eebcb4ea524f37b562f84236961c2bd6e26800993be7000696ca Nov 24 11:24:53 crc kubenswrapper[5072]: I1124 11:24:53.249064 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7d4a-account-create-vqdtq"] Nov 24 11:24:53 crc kubenswrapper[5072]: I1124 11:24:53.308677 5072 generic.go:334] "Generic (PLEG): container finished" podID="195a7abe-4729-4b77-8198-3eca911c2d84" containerID="3ffb029530f8c0960bdb88fdef4fee7e32a9264d54b36eae6daf9c001e91b67c" exitCode=0 Nov 24 11:24:53 crc kubenswrapper[5072]: I1124 11:24:53.308741 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jq922" event={"ID":"195a7abe-4729-4b77-8198-3eca911c2d84","Type":"ContainerDied","Data":"3ffb029530f8c0960bdb88fdef4fee7e32a9264d54b36eae6daf9c001e91b67c"} Nov 24 11:24:53 crc kubenswrapper[5072]: I1124 11:24:53.308766 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jq922" event={"ID":"195a7abe-4729-4b77-8198-3eca911c2d84","Type":"ContainerStarted","Data":"d0982ad7c51b4ce3182d0fc68a209c2c0734a672bc1aabf90f627ef9269415ca"} Nov 24 11:24:53 crc kubenswrapper[5072]: 
I1124 11:24:53.314408 5072 generic.go:334] "Generic (PLEG): container finished" podID="e39b3a7c-db7f-4d96-bbb1-1293b0432659" containerID="9f500e95440ace3eec82f42e3fa443b276bb52c188ece8717e9c03a4315994d4" exitCode=0 Nov 24 11:24:53 crc kubenswrapper[5072]: I1124 11:24:53.314563 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5244-account-create-wtbzl" event={"ID":"e39b3a7c-db7f-4d96-bbb1-1293b0432659","Type":"ContainerDied","Data":"9f500e95440ace3eec82f42e3fa443b276bb52c188ece8717e9c03a4315994d4"} Nov 24 11:24:53 crc kubenswrapper[5072]: I1124 11:24:53.314586 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5244-account-create-wtbzl" event={"ID":"e39b3a7c-db7f-4d96-bbb1-1293b0432659","Type":"ContainerStarted","Data":"5a496627f5da4a64c3bc25ac5f0429fd251dc576d3dc26cc4e771645fc91d409"} Nov 24 11:24:53 crc kubenswrapper[5072]: I1124 11:24:53.318847 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-c6np9" event={"ID":"295f55cf-b9ac-454a-a715-b48c901a8f34","Type":"ContainerStarted","Data":"141c95e10db41e165a541f4d33e3fb431d956449fd956ad2cbd8c4930ff2f384"} Nov 24 11:24:53 crc kubenswrapper[5072]: I1124 11:24:53.318871 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-c6np9" event={"ID":"295f55cf-b9ac-454a-a715-b48c901a8f34","Type":"ContainerStarted","Data":"e87c5b357597eebcb4ea524f37b562f84236961c2bd6e26800993be7000696ca"} Nov 24 11:24:53 crc kubenswrapper[5072]: I1124 11:24:53.328840 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d4a-account-create-vqdtq" event={"ID":"0d72b502-87c9-475a-93b4-739816ea7f7e","Type":"ContainerStarted","Data":"584b6a9fb3d4cce157752b98cd032a5b6e58425b3aef754a2fc9593829f29472"} Nov 24 11:24:53 crc kubenswrapper[5072]: I1124 11:24:53.362824 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-c6np9" podStartSLOduration=1.362803261 podStartE2EDuration="1.362803261s" podCreationTimestamp="2025-11-24 11:24:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:24:53.355977709 +0000 UTC m=+945.067502185" watchObservedRunningTime="2025-11-24 11:24:53.362803261 +0000 UTC m=+945.074327737" Nov 24 11:24:53 crc kubenswrapper[5072]: I1124 11:24:53.381665 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-hdh5p"] Nov 24 11:24:53 crc kubenswrapper[5072]: W1124 11:24:53.418969 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76bdb5be_3864_4599_9ac5_7475f63290a3.slice/crio-a2c9ea108cb412de556d40d687a9a7a2d5873f1d48a4612eb3a94c70fb98a2cd WatchSource:0}: Error finding container a2c9ea108cb412de556d40d687a9a7a2d5873f1d48a4612eb3a94c70fb98a2cd: Status 404 returned error can't find the container with id a2c9ea108cb412de556d40d687a9a7a2d5873f1d48a4612eb3a94c70fb98a2cd Nov 24 11:24:54 crc kubenswrapper[5072]: I1124 11:24:54.338180 5072 generic.go:334] "Generic (PLEG): container finished" podID="0d72b502-87c9-475a-93b4-739816ea7f7e" containerID="c6ae9fdf337c178e542c8bc87178d1fbfe2dc0bd1fce6fc30fa1181524b456a8" exitCode=0 Nov 24 11:24:54 crc kubenswrapper[5072]: I1124 11:24:54.338236 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d4a-account-create-vqdtq" 
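The pod_startup_latency_tracker entry above is the kubelet's startup SLI measurement for placement-db-create-c6np9: podStartSLOduration and podStartE2EDuration are both about 1.36 s, and the zero-valued firstStartedPulling/lastFinishedPulling timestamps indicate no image pull was needed (the SLO duration is meant to exclude pull time, which is why the two agree here). A sketch that extracts these measurements and flags slow starts (the threshold is an arbitrary example value):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "strconv"
    )

    // Matches pod_startup_latency_tracker entries, e.g.
    // "Observed pod startup duration" pod="ns/name" podStartSLOduration=1.362803261 ...
    var sloRe = regexp.MustCompile(`"Observed pod startup duration" pod="([^"]+)" podStartSLOduration=([0-9.]+)`)

    const slowStart = 10.0 // seconds; arbitrary example threshold

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
        for sc.Scan() {
            m := sloRe.FindStringSubmatch(sc.Text())
            if m == nil {
                continue
            }
            secs, err := strconv.ParseFloat(m[2], 64)
            if err != nil {
                continue
            }
            marker := ""
            if secs > slowStart {
                marker = "  <-- slow"
            }
            fmt.Printf("%-50s %8.3fs%s\n", m[1], secs, marker)
        }
    }

On a node with a cold image cache the e2e duration can be much larger than the SLO duration, so comparing the two fields is a quick way to separate pull time from genuine startup slowness.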
event={"ID":"0d72b502-87c9-475a-93b4-739816ea7f7e","Type":"ContainerDied","Data":"c6ae9fdf337c178e542c8bc87178d1fbfe2dc0bd1fce6fc30fa1181524b456a8"} Nov 24 11:24:54 crc kubenswrapper[5072]: I1124 11:24:54.341214 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hdh5p" event={"ID":"76bdb5be-3864-4599-9ac5-7475f63290a3","Type":"ContainerStarted","Data":"a2c9ea108cb412de556d40d687a9a7a2d5873f1d48a4612eb3a94c70fb98a2cd"} Nov 24 11:24:54 crc kubenswrapper[5072]: I1124 11:24:54.343492 5072 generic.go:334] "Generic (PLEG): container finished" podID="295f55cf-b9ac-454a-a715-b48c901a8f34" containerID="141c95e10db41e165a541f4d33e3fb431d956449fd956ad2cbd8c4930ff2f384" exitCode=0 Nov 24 11:24:54 crc kubenswrapper[5072]: I1124 11:24:54.343583 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-c6np9" event={"ID":"295f55cf-b9ac-454a-a715-b48c901a8f34","Type":"ContainerDied","Data":"141c95e10db41e165a541f4d33e3fb431d956449fd956ad2cbd8c4930ff2f384"} Nov 24 11:24:54 crc kubenswrapper[5072]: I1124 11:24:54.666411 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5244-account-create-wtbzl" Nov 24 11:24:54 crc kubenswrapper[5072]: I1124 11:24:54.764801 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jq922" Nov 24 11:24:54 crc kubenswrapper[5072]: I1124 11:24:54.783034 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dbjs\" (UniqueName: \"kubernetes.io/projected/e39b3a7c-db7f-4d96-bbb1-1293b0432659-kube-api-access-8dbjs\") pod \"e39b3a7c-db7f-4d96-bbb1-1293b0432659\" (UID: \"e39b3a7c-db7f-4d96-bbb1-1293b0432659\") " Nov 24 11:24:54 crc kubenswrapper[5072]: I1124 11:24:54.783093 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e39b3a7c-db7f-4d96-bbb1-1293b0432659-operator-scripts\") pod \"e39b3a7c-db7f-4d96-bbb1-1293b0432659\" (UID: \"e39b3a7c-db7f-4d96-bbb1-1293b0432659\") " Nov 24 11:24:54 crc kubenswrapper[5072]: I1124 11:24:54.784157 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e39b3a7c-db7f-4d96-bbb1-1293b0432659-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e39b3a7c-db7f-4d96-bbb1-1293b0432659" (UID: "e39b3a7c-db7f-4d96-bbb1-1293b0432659"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:24:54 crc kubenswrapper[5072]: I1124 11:24:54.789600 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e39b3a7c-db7f-4d96-bbb1-1293b0432659-kube-api-access-8dbjs" (OuterVolumeSpecName: "kube-api-access-8dbjs") pod "e39b3a7c-db7f-4d96-bbb1-1293b0432659" (UID: "e39b3a7c-db7f-4d96-bbb1-1293b0432659"). InnerVolumeSpecName "kube-api-access-8dbjs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:24:54 crc kubenswrapper[5072]: I1124 11:24:54.884573 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vq4cz\" (UniqueName: \"kubernetes.io/projected/195a7abe-4729-4b77-8198-3eca911c2d84-kube-api-access-vq4cz\") pod \"195a7abe-4729-4b77-8198-3eca911c2d84\" (UID: \"195a7abe-4729-4b77-8198-3eca911c2d84\") " Nov 24 11:24:54 crc kubenswrapper[5072]: I1124 11:24:54.884680 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/195a7abe-4729-4b77-8198-3eca911c2d84-operator-scripts\") pod \"195a7abe-4729-4b77-8198-3eca911c2d84\" (UID: \"195a7abe-4729-4b77-8198-3eca911c2d84\") " Nov 24 11:24:54 crc kubenswrapper[5072]: I1124 11:24:54.885030 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dbjs\" (UniqueName: \"kubernetes.io/projected/e39b3a7c-db7f-4d96-bbb1-1293b0432659-kube-api-access-8dbjs\") on node \"crc\" DevicePath \"\"" Nov 24 11:24:54 crc kubenswrapper[5072]: I1124 11:24:54.885048 5072 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e39b3a7c-db7f-4d96-bbb1-1293b0432659-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:24:54 crc kubenswrapper[5072]: I1124 11:24:54.885092 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/195a7abe-4729-4b77-8198-3eca911c2d84-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "195a7abe-4729-4b77-8198-3eca911c2d84" (UID: "195a7abe-4729-4b77-8198-3eca911c2d84"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:24:54 crc kubenswrapper[5072]: I1124 11:24:54.887544 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/195a7abe-4729-4b77-8198-3eca911c2d84-kube-api-access-vq4cz" (OuterVolumeSpecName: "kube-api-access-vq4cz") pod "195a7abe-4729-4b77-8198-3eca911c2d84" (UID: "195a7abe-4729-4b77-8198-3eca911c2d84"). InnerVolumeSpecName "kube-api-access-vq4cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:24:54 crc kubenswrapper[5072]: I1124 11:24:54.986347 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vq4cz\" (UniqueName: \"kubernetes.io/projected/195a7abe-4729-4b77-8198-3eca911c2d84-kube-api-access-vq4cz\") on node \"crc\" DevicePath \"\"" Nov 24 11:24:54 crc kubenswrapper[5072]: I1124 11:24:54.986769 5072 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/195a7abe-4729-4b77-8198-3eca911c2d84-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:24:55 crc kubenswrapper[5072]: I1124 11:24:55.352996 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5244-account-create-wtbzl" Nov 24 11:24:55 crc kubenswrapper[5072]: I1124 11:24:55.353617 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5244-account-create-wtbzl" event={"ID":"e39b3a7c-db7f-4d96-bbb1-1293b0432659","Type":"ContainerDied","Data":"5a496627f5da4a64c3bc25ac5f0429fd251dc576d3dc26cc4e771645fc91d409"} Nov 24 11:24:55 crc kubenswrapper[5072]: I1124 11:24:55.353640 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a496627f5da4a64c3bc25ac5f0429fd251dc576d3dc26cc4e771645fc91d409" Nov 24 11:24:55 crc kubenswrapper[5072]: I1124 11:24:55.354766 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jq922" Nov 24 11:24:55 crc kubenswrapper[5072]: I1124 11:24:55.355064 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jq922" event={"ID":"195a7abe-4729-4b77-8198-3eca911c2d84","Type":"ContainerDied","Data":"d0982ad7c51b4ce3182d0fc68a209c2c0734a672bc1aabf90f627ef9269415ca"} Nov 24 11:24:55 crc kubenswrapper[5072]: I1124 11:24:55.355084 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0982ad7c51b4ce3182d0fc68a209c2c0734a672bc1aabf90f627ef9269415ca" Nov 24 11:24:55 crc kubenswrapper[5072]: I1124 11:24:55.719177 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-c6np9" Nov 24 11:24:55 crc kubenswrapper[5072]: I1124 11:24:55.723894 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7d4a-account-create-vqdtq" Nov 24 11:24:55 crc kubenswrapper[5072]: I1124 11:24:55.799353 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6p7k\" (UniqueName: \"kubernetes.io/projected/295f55cf-b9ac-454a-a715-b48c901a8f34-kube-api-access-h6p7k\") pod \"295f55cf-b9ac-454a-a715-b48c901a8f34\" (UID: \"295f55cf-b9ac-454a-a715-b48c901a8f34\") " Nov 24 11:24:55 crc kubenswrapper[5072]: I1124 11:24:55.799417 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/295f55cf-b9ac-454a-a715-b48c901a8f34-operator-scripts\") pod \"295f55cf-b9ac-454a-a715-b48c901a8f34\" (UID: \"295f55cf-b9ac-454a-a715-b48c901a8f34\") " Nov 24 11:24:55 crc kubenswrapper[5072]: I1124 11:24:55.799569 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d72b502-87c9-475a-93b4-739816ea7f7e-operator-scripts\") pod \"0d72b502-87c9-475a-93b4-739816ea7f7e\" (UID: \"0d72b502-87c9-475a-93b4-739816ea7f7e\") " Nov 24 11:24:55 crc kubenswrapper[5072]: I1124 11:24:55.799667 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdmlk\" (UniqueName: \"kubernetes.io/projected/0d72b502-87c9-475a-93b4-739816ea7f7e-kube-api-access-cdmlk\") pod \"0d72b502-87c9-475a-93b4-739816ea7f7e\" (UID: \"0d72b502-87c9-475a-93b4-739816ea7f7e\") " Nov 24 11:24:55 crc kubenswrapper[5072]: I1124 11:24:55.800188 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/295f55cf-b9ac-454a-a715-b48c901a8f34-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "295f55cf-b9ac-454a-a715-b48c901a8f34" (UID: "295f55cf-b9ac-454a-a715-b48c901a8f34"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:24:55 crc kubenswrapper[5072]: I1124 11:24:55.800273 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d72b502-87c9-475a-93b4-739816ea7f7e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0d72b502-87c9-475a-93b4-739816ea7f7e" (UID: "0d72b502-87c9-475a-93b4-739816ea7f7e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:24:55 crc kubenswrapper[5072]: I1124 11:24:55.804926 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d72b502-87c9-475a-93b4-739816ea7f7e-kube-api-access-cdmlk" (OuterVolumeSpecName: "kube-api-access-cdmlk") pod "0d72b502-87c9-475a-93b4-739816ea7f7e" (UID: "0d72b502-87c9-475a-93b4-739816ea7f7e"). InnerVolumeSpecName "kube-api-access-cdmlk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:24:55 crc kubenswrapper[5072]: I1124 11:24:55.814941 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/295f55cf-b9ac-454a-a715-b48c901a8f34-kube-api-access-h6p7k" (OuterVolumeSpecName: "kube-api-access-h6p7k") pod "295f55cf-b9ac-454a-a715-b48c901a8f34" (UID: "295f55cf-b9ac-454a-a715-b48c901a8f34"). InnerVolumeSpecName "kube-api-access-h6p7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:24:55 crc kubenswrapper[5072]: I1124 11:24:55.901608 5072 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d72b502-87c9-475a-93b4-739816ea7f7e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:24:55 crc kubenswrapper[5072]: I1124 11:24:55.901839 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdmlk\" (UniqueName: \"kubernetes.io/projected/0d72b502-87c9-475a-93b4-739816ea7f7e-kube-api-access-cdmlk\") on node \"crc\" DevicePath \"\"" Nov 24 11:24:55 crc kubenswrapper[5072]: I1124 11:24:55.901851 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6p7k\" (UniqueName: \"kubernetes.io/projected/295f55cf-b9ac-454a-a715-b48c901a8f34-kube-api-access-h6p7k\") on node \"crc\" DevicePath \"\"" Nov 24 11:24:55 crc kubenswrapper[5072]: I1124 11:24:55.901861 5072 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/295f55cf-b9ac-454a-a715-b48c901a8f34-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:24:56 crc kubenswrapper[5072]: I1124 11:24:56.367410 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-c6np9" Nov 24 11:24:56 crc kubenswrapper[5072]: I1124 11:24:56.367625 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-c6np9" event={"ID":"295f55cf-b9ac-454a-a715-b48c901a8f34","Type":"ContainerDied","Data":"e87c5b357597eebcb4ea524f37b562f84236961c2bd6e26800993be7000696ca"} Nov 24 11:24:56 crc kubenswrapper[5072]: I1124 11:24:56.367672 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e87c5b357597eebcb4ea524f37b562f84236961c2bd6e26800993be7000696ca" Nov 24 11:24:56 crc kubenswrapper[5072]: I1124 11:24:56.369292 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d4a-account-create-vqdtq" event={"ID":"0d72b502-87c9-475a-93b4-739816ea7f7e","Type":"ContainerDied","Data":"584b6a9fb3d4cce157752b98cd032a5b6e58425b3aef754a2fc9593829f29472"} Nov 24 11:24:56 crc kubenswrapper[5072]: I1124 11:24:56.369342 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="584b6a9fb3d4cce157752b98cd032a5b6e58425b3aef754a2fc9593829f29472" Nov 24 11:24:56 crc kubenswrapper[5072]: I1124 11:24:56.369407 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7d4a-account-create-vqdtq" Nov 24 11:24:57 crc kubenswrapper[5072]: I1124 11:24:57.377822 5072 generic.go:334] "Generic (PLEG): container finished" podID="224cff60-3d72-478d-9788-926bbca42ad2" containerID="2e81d597c043ecd78e584bee1d8d13ad13881786d38a4fbb7fe5f5e65775c121" exitCode=0 Nov 24 11:24:57 crc kubenswrapper[5072]: I1124 11:24:57.377867 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"224cff60-3d72-478d-9788-926bbca42ad2","Type":"ContainerDied","Data":"2e81d597c043ecd78e584bee1d8d13ad13881786d38a4fbb7fe5f5e65775c121"} Nov 24 11:24:58 crc kubenswrapper[5072]: I1124 11:24:58.386692 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"224cff60-3d72-478d-9788-926bbca42ad2","Type":"ContainerStarted","Data":"7632bd7692c742dde61619c49b4b4c3df75f9dab1b21043cfeb0c078e48057b5"} Nov 24 11:24:58 crc kubenswrapper[5072]: I1124 11:24:58.387170 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:24:58 crc kubenswrapper[5072]: I1124 11:24:58.405112 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.456833047 podStartE2EDuration="51.405097074s" podCreationTimestamp="2025-11-24 11:24:07 +0000 UTC" firstStartedPulling="2025-11-24 11:24:09.622027352 +0000 UTC m=+901.333551828" lastFinishedPulling="2025-11-24 11:24:23.570291379 +0000 UTC m=+915.281815855" observedRunningTime="2025-11-24 11:24:58.40454523 +0000 UTC m=+950.116069706" watchObservedRunningTime="2025-11-24 11:24:58.405097074 +0000 UTC m=+950.116621550" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.206394 5072 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ltkhm" podUID="d1f48ba7-b537-4282-9eef-aee78410afcb" containerName="ovn-controller" probeResult="failure" output=< Nov 24 11:25:03 crc kubenswrapper[5072]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 24 11:25:03 crc kubenswrapper[5072]: > Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.216142 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/ovn-controller-ovs-7tcxz" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.260104 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7tcxz" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.461218 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ltkhm-config-628bc"] Nov 24 11:25:03 crc kubenswrapper[5072]: E1124 11:25:03.461558 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="195a7abe-4729-4b77-8198-3eca911c2d84" containerName="mariadb-database-create" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.461569 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="195a7abe-4729-4b77-8198-3eca911c2d84" containerName="mariadb-database-create" Nov 24 11:25:03 crc kubenswrapper[5072]: E1124 11:25:03.461585 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39b3a7c-db7f-4d96-bbb1-1293b0432659" containerName="mariadb-account-create" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.461591 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39b3a7c-db7f-4d96-bbb1-1293b0432659" containerName="mariadb-account-create" Nov 24 11:25:03 crc kubenswrapper[5072]: E1124 11:25:03.461620 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d72b502-87c9-475a-93b4-739816ea7f7e" containerName="mariadb-account-create" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.461625 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d72b502-87c9-475a-93b4-739816ea7f7e" containerName="mariadb-account-create" Nov 24 11:25:03 crc kubenswrapper[5072]: E1124 11:25:03.461638 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="295f55cf-b9ac-454a-a715-b48c901a8f34" containerName="mariadb-database-create" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.461644 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="295f55cf-b9ac-454a-a715-b48c901a8f34" containerName="mariadb-database-create" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.461792 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="295f55cf-b9ac-454a-a715-b48c901a8f34" containerName="mariadb-database-create" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.461804 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d72b502-87c9-475a-93b4-739816ea7f7e" containerName="mariadb-account-create" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.461819 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39b3a7c-db7f-4d96-bbb1-1293b0432659" containerName="mariadb-account-create" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.461827 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="195a7abe-4729-4b77-8198-3eca911c2d84" containerName="mariadb-database-create" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.462306 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ltkhm-config-628bc" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.465651 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.479933 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ltkhm-config-628bc"] Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.567721 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0360518b-381a-4d27-b4fe-9a25bb76024f-additional-scripts\") pod \"ovn-controller-ltkhm-config-628bc\" (UID: \"0360518b-381a-4d27-b4fe-9a25bb76024f\") " pod="openstack/ovn-controller-ltkhm-config-628bc" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.567793 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0360518b-381a-4d27-b4fe-9a25bb76024f-var-run-ovn\") pod \"ovn-controller-ltkhm-config-628bc\" (UID: \"0360518b-381a-4d27-b4fe-9a25bb76024f\") " pod="openstack/ovn-controller-ltkhm-config-628bc" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.567826 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0360518b-381a-4d27-b4fe-9a25bb76024f-var-run\") pod \"ovn-controller-ltkhm-config-628bc\" (UID: \"0360518b-381a-4d27-b4fe-9a25bb76024f\") " pod="openstack/ovn-controller-ltkhm-config-628bc" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.567853 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjbch\" (UniqueName: \"kubernetes.io/projected/0360518b-381a-4d27-b4fe-9a25bb76024f-kube-api-access-mjbch\") pod \"ovn-controller-ltkhm-config-628bc\" (UID: \"0360518b-381a-4d27-b4fe-9a25bb76024f\") " pod="openstack/ovn-controller-ltkhm-config-628bc" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.567877 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0360518b-381a-4d27-b4fe-9a25bb76024f-scripts\") pod \"ovn-controller-ltkhm-config-628bc\" (UID: \"0360518b-381a-4d27-b4fe-9a25bb76024f\") " pod="openstack/ovn-controller-ltkhm-config-628bc" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.567935 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0360518b-381a-4d27-b4fe-9a25bb76024f-var-log-ovn\") pod \"ovn-controller-ltkhm-config-628bc\" (UID: \"0360518b-381a-4d27-b4fe-9a25bb76024f\") " pod="openstack/ovn-controller-ltkhm-config-628bc" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.669467 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0360518b-381a-4d27-b4fe-9a25bb76024f-var-run-ovn\") pod \"ovn-controller-ltkhm-config-628bc\" (UID: \"0360518b-381a-4d27-b4fe-9a25bb76024f\") " pod="openstack/ovn-controller-ltkhm-config-628bc" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.669534 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0360518b-381a-4d27-b4fe-9a25bb76024f-var-run\") pod 
\"ovn-controller-ltkhm-config-628bc\" (UID: \"0360518b-381a-4d27-b4fe-9a25bb76024f\") " pod="openstack/ovn-controller-ltkhm-config-628bc" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.669577 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjbch\" (UniqueName: \"kubernetes.io/projected/0360518b-381a-4d27-b4fe-9a25bb76024f-kube-api-access-mjbch\") pod \"ovn-controller-ltkhm-config-628bc\" (UID: \"0360518b-381a-4d27-b4fe-9a25bb76024f\") " pod="openstack/ovn-controller-ltkhm-config-628bc" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.669615 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0360518b-381a-4d27-b4fe-9a25bb76024f-scripts\") pod \"ovn-controller-ltkhm-config-628bc\" (UID: \"0360518b-381a-4d27-b4fe-9a25bb76024f\") " pod="openstack/ovn-controller-ltkhm-config-628bc" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.669677 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0360518b-381a-4d27-b4fe-9a25bb76024f-var-log-ovn\") pod \"ovn-controller-ltkhm-config-628bc\" (UID: \"0360518b-381a-4d27-b4fe-9a25bb76024f\") " pod="openstack/ovn-controller-ltkhm-config-628bc" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.669857 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0360518b-381a-4d27-b4fe-9a25bb76024f-additional-scripts\") pod \"ovn-controller-ltkhm-config-628bc\" (UID: \"0360518b-381a-4d27-b4fe-9a25bb76024f\") " pod="openstack/ovn-controller-ltkhm-config-628bc" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.671437 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0360518b-381a-4d27-b4fe-9a25bb76024f-additional-scripts\") pod \"ovn-controller-ltkhm-config-628bc\" (UID: \"0360518b-381a-4d27-b4fe-9a25bb76024f\") " pod="openstack/ovn-controller-ltkhm-config-628bc" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.671796 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0360518b-381a-4d27-b4fe-9a25bb76024f-var-run-ovn\") pod \"ovn-controller-ltkhm-config-628bc\" (UID: \"0360518b-381a-4d27-b4fe-9a25bb76024f\") " pod="openstack/ovn-controller-ltkhm-config-628bc" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.671873 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0360518b-381a-4d27-b4fe-9a25bb76024f-var-run\") pod \"ovn-controller-ltkhm-config-628bc\" (UID: \"0360518b-381a-4d27-b4fe-9a25bb76024f\") " pod="openstack/ovn-controller-ltkhm-config-628bc" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.672871 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0360518b-381a-4d27-b4fe-9a25bb76024f-var-log-ovn\") pod \"ovn-controller-ltkhm-config-628bc\" (UID: \"0360518b-381a-4d27-b4fe-9a25bb76024f\") " pod="openstack/ovn-controller-ltkhm-config-628bc" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.675824 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0360518b-381a-4d27-b4fe-9a25bb76024f-scripts\") pod 
\"ovn-controller-ltkhm-config-628bc\" (UID: \"0360518b-381a-4d27-b4fe-9a25bb76024f\") " pod="openstack/ovn-controller-ltkhm-config-628bc" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.695396 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjbch\" (UniqueName: \"kubernetes.io/projected/0360518b-381a-4d27-b4fe-9a25bb76024f-kube-api-access-mjbch\") pod \"ovn-controller-ltkhm-config-628bc\" (UID: \"0360518b-381a-4d27-b4fe-9a25bb76024f\") " pod="openstack/ovn-controller-ltkhm-config-628bc" Nov 24 11:25:03 crc kubenswrapper[5072]: I1124 11:25:03.786125 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ltkhm-config-628bc" Nov 24 11:25:05 crc kubenswrapper[5072]: I1124 11:25:05.183456 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ltkhm-config-628bc"] Nov 24 11:25:05 crc kubenswrapper[5072]: I1124 11:25:05.441737 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ltkhm-config-628bc" event={"ID":"0360518b-381a-4d27-b4fe-9a25bb76024f","Type":"ContainerStarted","Data":"2290d475fe122a82a3ce561fe5e114d463c53f7e287a33368437d0aef21b063f"} Nov 24 11:25:06 crc kubenswrapper[5072]: I1124 11:25:06.453654 5072 generic.go:334] "Generic (PLEG): container finished" podID="0360518b-381a-4d27-b4fe-9a25bb76024f" containerID="2bcce05c4b56d34202a761419d3cefa1ec23b24d985c80289439bbbeb44bab15" exitCode=0 Nov 24 11:25:06 crc kubenswrapper[5072]: I1124 11:25:06.453757 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ltkhm-config-628bc" event={"ID":"0360518b-381a-4d27-b4fe-9a25bb76024f","Type":"ContainerDied","Data":"2bcce05c4b56d34202a761419d3cefa1ec23b24d985c80289439bbbeb44bab15"} Nov 24 11:25:06 crc kubenswrapper[5072]: I1124 11:25:06.458538 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hdh5p" event={"ID":"76bdb5be-3864-4599-9ac5-7475f63290a3","Type":"ContainerStarted","Data":"6f0aee7456017afe4c9bdab4835d829de82ab09d8479737a6f5ff3ba41e709f2"} Nov 24 11:25:06 crc kubenswrapper[5072]: I1124 11:25:06.513296 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-hdh5p" podStartSLOduration=2.893570345 podStartE2EDuration="14.513270354s" podCreationTimestamp="2025-11-24 11:24:52 +0000 UTC" firstStartedPulling="2025-11-24 11:24:53.421458029 +0000 UTC m=+945.132982505" lastFinishedPulling="2025-11-24 11:25:05.041158038 +0000 UTC m=+956.752682514" observedRunningTime="2025-11-24 11:25:06.504278268 +0000 UTC m=+958.215802814" watchObservedRunningTime="2025-11-24 11:25:06.513270354 +0000 UTC m=+958.224794840" Nov 24 11:25:07 crc kubenswrapper[5072]: I1124 11:25:07.863575 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ltkhm-config-628bc" Nov 24 11:25:07 crc kubenswrapper[5072]: I1124 11:25:07.961939 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0360518b-381a-4d27-b4fe-9a25bb76024f-var-run-ovn\") pod \"0360518b-381a-4d27-b4fe-9a25bb76024f\" (UID: \"0360518b-381a-4d27-b4fe-9a25bb76024f\") " Nov 24 11:25:07 crc kubenswrapper[5072]: I1124 11:25:07.962086 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0360518b-381a-4d27-b4fe-9a25bb76024f-additional-scripts\") pod \"0360518b-381a-4d27-b4fe-9a25bb76024f\" (UID: \"0360518b-381a-4d27-b4fe-9a25bb76024f\") " Nov 24 11:25:07 crc kubenswrapper[5072]: I1124 11:25:07.962089 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0360518b-381a-4d27-b4fe-9a25bb76024f-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "0360518b-381a-4d27-b4fe-9a25bb76024f" (UID: "0360518b-381a-4d27-b4fe-9a25bb76024f"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:25:07 crc kubenswrapper[5072]: I1124 11:25:07.962119 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0360518b-381a-4d27-b4fe-9a25bb76024f-var-run\") pod \"0360518b-381a-4d27-b4fe-9a25bb76024f\" (UID: \"0360518b-381a-4d27-b4fe-9a25bb76024f\") " Nov 24 11:25:07 crc kubenswrapper[5072]: I1124 11:25:07.962207 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0360518b-381a-4d27-b4fe-9a25bb76024f-scripts\") pod \"0360518b-381a-4d27-b4fe-9a25bb76024f\" (UID: \"0360518b-381a-4d27-b4fe-9a25bb76024f\") " Nov 24 11:25:07 crc kubenswrapper[5072]: I1124 11:25:07.962255 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjbch\" (UniqueName: \"kubernetes.io/projected/0360518b-381a-4d27-b4fe-9a25bb76024f-kube-api-access-mjbch\") pod \"0360518b-381a-4d27-b4fe-9a25bb76024f\" (UID: \"0360518b-381a-4d27-b4fe-9a25bb76024f\") " Nov 24 11:25:07 crc kubenswrapper[5072]: I1124 11:25:07.962282 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0360518b-381a-4d27-b4fe-9a25bb76024f-var-log-ovn\") pod \"0360518b-381a-4d27-b4fe-9a25bb76024f\" (UID: \"0360518b-381a-4d27-b4fe-9a25bb76024f\") " Nov 24 11:25:07 crc kubenswrapper[5072]: I1124 11:25:07.962333 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0360518b-381a-4d27-b4fe-9a25bb76024f-var-run" (OuterVolumeSpecName: "var-run") pod "0360518b-381a-4d27-b4fe-9a25bb76024f" (UID: "0360518b-381a-4d27-b4fe-9a25bb76024f"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:25:07 crc kubenswrapper[5072]: I1124 11:25:07.962679 5072 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0360518b-381a-4d27-b4fe-9a25bb76024f-var-run\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:07 crc kubenswrapper[5072]: I1124 11:25:07.962696 5072 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0360518b-381a-4d27-b4fe-9a25bb76024f-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:07 crc kubenswrapper[5072]: I1124 11:25:07.962735 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0360518b-381a-4d27-b4fe-9a25bb76024f-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "0360518b-381a-4d27-b4fe-9a25bb76024f" (UID: "0360518b-381a-4d27-b4fe-9a25bb76024f"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:25:07 crc kubenswrapper[5072]: I1124 11:25:07.962878 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0360518b-381a-4d27-b4fe-9a25bb76024f-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "0360518b-381a-4d27-b4fe-9a25bb76024f" (UID: "0360518b-381a-4d27-b4fe-9a25bb76024f"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:25:07 crc kubenswrapper[5072]: I1124 11:25:07.963710 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0360518b-381a-4d27-b4fe-9a25bb76024f-scripts" (OuterVolumeSpecName: "scripts") pod "0360518b-381a-4d27-b4fe-9a25bb76024f" (UID: "0360518b-381a-4d27-b4fe-9a25bb76024f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:25:07 crc kubenswrapper[5072]: I1124 11:25:07.971704 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0360518b-381a-4d27-b4fe-9a25bb76024f-kube-api-access-mjbch" (OuterVolumeSpecName: "kube-api-access-mjbch") pod "0360518b-381a-4d27-b4fe-9a25bb76024f" (UID: "0360518b-381a-4d27-b4fe-9a25bb76024f"). InnerVolumeSpecName "kube-api-access-mjbch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:25:08 crc kubenswrapper[5072]: I1124 11:25:08.064132 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0360518b-381a-4d27-b4fe-9a25bb76024f-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:08 crc kubenswrapper[5072]: I1124 11:25:08.064178 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjbch\" (UniqueName: \"kubernetes.io/projected/0360518b-381a-4d27-b4fe-9a25bb76024f-kube-api-access-mjbch\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:08 crc kubenswrapper[5072]: I1124 11:25:08.064197 5072 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0360518b-381a-4d27-b4fe-9a25bb76024f-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:08 crc kubenswrapper[5072]: I1124 11:25:08.064214 5072 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0360518b-381a-4d27-b4fe-9a25bb76024f-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:08 crc kubenswrapper[5072]: I1124 11:25:08.201186 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ltkhm" Nov 24 11:25:08 crc kubenswrapper[5072]: I1124 11:25:08.480534 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ltkhm-config-628bc" event={"ID":"0360518b-381a-4d27-b4fe-9a25bb76024f","Type":"ContainerDied","Data":"2290d475fe122a82a3ce561fe5e114d463c53f7e287a33368437d0aef21b063f"} Nov 24 11:25:08 crc kubenswrapper[5072]: I1124 11:25:08.480578 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2290d475fe122a82a3ce561fe5e114d463c53f7e287a33368437d0aef21b063f" Nov 24 11:25:08 crc kubenswrapper[5072]: I1124 11:25:08.480631 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ltkhm-config-628bc" Nov 24 11:25:08 crc kubenswrapper[5072]: I1124 11:25:08.973769 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ltkhm-config-628bc"] Nov 24 11:25:08 crc kubenswrapper[5072]: I1124 11:25:08.978919 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ltkhm-config-628bc"] Nov 24 11:25:09 crc kubenswrapper[5072]: I1124 11:25:09.032232 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0360518b-381a-4d27-b4fe-9a25bb76024f" path="/var/lib/kubelet/pods/0360518b-381a-4d27-b4fe-9a25bb76024f/volumes" Nov 24 11:25:09 crc kubenswrapper[5072]: I1124 11:25:09.058461 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:25:10 crc kubenswrapper[5072]: I1124 11:25:10.496269 5072 generic.go:334] "Generic (PLEG): container finished" podID="354afe75-70d3-4c45-a990-0299f821b0af" containerID="50ed5bcf7b58686c9c39d2083331f2f908ec020f73f7ca7435cdf2c9fd7abe38" exitCode=0 Nov 24 11:25:10 crc kubenswrapper[5072]: I1124 11:25:10.496430 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"354afe75-70d3-4c45-a990-0299f821b0af","Type":"ContainerDied","Data":"50ed5bcf7b58686c9c39d2083331f2f908ec020f73f7ca7435cdf2c9fd7abe38"} Nov 24 11:25:11 crc kubenswrapper[5072]: I1124 11:25:11.511533 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"354afe75-70d3-4c45-a990-0299f821b0af","Type":"ContainerStarted","Data":"5289899340a01a653ec7ac1b228e516c26a5e7582db802a8b49f051bfabe2c2f"} Nov 24 11:25:11 crc kubenswrapper[5072]: I1124 11:25:11.512116 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 24 11:25:11 crc kubenswrapper[5072]: I1124 11:25:11.540037 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371972.314758 podStartE2EDuration="1m4.540017515s" podCreationTimestamp="2025-11-24 11:24:07 +0000 UTC" firstStartedPulling="2025-11-24 11:24:09.184740093 +0000 UTC m=+900.896264569" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:25:11.537353978 +0000 UTC m=+963.248878454" watchObservedRunningTime="2025-11-24 11:25:11.540017515 +0000 UTC m=+963.251542001" Nov 24 11:25:12 crc kubenswrapper[5072]: I1124 11:25:12.541489 5072 generic.go:334] "Generic (PLEG): container finished" podID="76bdb5be-3864-4599-9ac5-7475f63290a3" containerID="6f0aee7456017afe4c9bdab4835d829de82ab09d8479737a6f5ff3ba41e709f2" exitCode=0 Nov 24 11:25:12 crc kubenswrapper[5072]: I1124 11:25:12.541571 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hdh5p" event={"ID":"76bdb5be-3864-4599-9ac5-7475f63290a3","Type":"ContainerDied","Data":"6f0aee7456017afe4c9bdab4835d829de82ab09d8479737a6f5ff3ba41e709f2"} Nov 24 11:25:14 crc kubenswrapper[5072]: I1124 11:25:14.023791 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-hdh5p" Nov 24 11:25:14 crc kubenswrapper[5072]: I1124 11:25:14.166850 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/76bdb5be-3864-4599-9ac5-7475f63290a3-db-sync-config-data\") pod \"76bdb5be-3864-4599-9ac5-7475f63290a3\" (UID: \"76bdb5be-3864-4599-9ac5-7475f63290a3\") " Nov 24 11:25:14 crc kubenswrapper[5072]: I1124 11:25:14.166995 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssqz7\" (UniqueName: \"kubernetes.io/projected/76bdb5be-3864-4599-9ac5-7475f63290a3-kube-api-access-ssqz7\") pod \"76bdb5be-3864-4599-9ac5-7475f63290a3\" (UID: \"76bdb5be-3864-4599-9ac5-7475f63290a3\") " Nov 24 11:25:14 crc kubenswrapper[5072]: I1124 11:25:14.167068 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76bdb5be-3864-4599-9ac5-7475f63290a3-combined-ca-bundle\") pod \"76bdb5be-3864-4599-9ac5-7475f63290a3\" (UID: \"76bdb5be-3864-4599-9ac5-7475f63290a3\") " Nov 24 11:25:14 crc kubenswrapper[5072]: I1124 11:25:14.167139 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76bdb5be-3864-4599-9ac5-7475f63290a3-config-data\") pod \"76bdb5be-3864-4599-9ac5-7475f63290a3\" (UID: \"76bdb5be-3864-4599-9ac5-7475f63290a3\") " Nov 24 11:25:14 crc kubenswrapper[5072]: I1124 11:25:14.172946 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76bdb5be-3864-4599-9ac5-7475f63290a3-kube-api-access-ssqz7" (OuterVolumeSpecName: "kube-api-access-ssqz7") pod "76bdb5be-3864-4599-9ac5-7475f63290a3" (UID: "76bdb5be-3864-4599-9ac5-7475f63290a3"). InnerVolumeSpecName "kube-api-access-ssqz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:25:14 crc kubenswrapper[5072]: I1124 11:25:14.176141 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76bdb5be-3864-4599-9ac5-7475f63290a3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "76bdb5be-3864-4599-9ac5-7475f63290a3" (UID: "76bdb5be-3864-4599-9ac5-7475f63290a3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:25:14 crc kubenswrapper[5072]: I1124 11:25:14.209591 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76bdb5be-3864-4599-9ac5-7475f63290a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76bdb5be-3864-4599-9ac5-7475f63290a3" (UID: "76bdb5be-3864-4599-9ac5-7475f63290a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:25:14 crc kubenswrapper[5072]: I1124 11:25:14.231481 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76bdb5be-3864-4599-9ac5-7475f63290a3-config-data" (OuterVolumeSpecName: "config-data") pod "76bdb5be-3864-4599-9ac5-7475f63290a3" (UID: "76bdb5be-3864-4599-9ac5-7475f63290a3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:25:14 crc kubenswrapper[5072]: I1124 11:25:14.268992 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76bdb5be-3864-4599-9ac5-7475f63290a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:14 crc kubenswrapper[5072]: I1124 11:25:14.269041 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76bdb5be-3864-4599-9ac5-7475f63290a3-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:14 crc kubenswrapper[5072]: I1124 11:25:14.269061 5072 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/76bdb5be-3864-4599-9ac5-7475f63290a3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:14 crc kubenswrapper[5072]: I1124 11:25:14.269079 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssqz7\" (UniqueName: \"kubernetes.io/projected/76bdb5be-3864-4599-9ac5-7475f63290a3-kube-api-access-ssqz7\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:14 crc kubenswrapper[5072]: I1124 11:25:14.562764 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hdh5p" event={"ID":"76bdb5be-3864-4599-9ac5-7475f63290a3","Type":"ContainerDied","Data":"a2c9ea108cb412de556d40d687a9a7a2d5873f1d48a4612eb3a94c70fb98a2cd"} Nov 24 11:25:14 crc kubenswrapper[5072]: I1124 11:25:14.562818 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2c9ea108cb412de556d40d687a9a7a2d5873f1d48a4612eb3a94c70fb98a2cd" Nov 24 11:25:14 crc kubenswrapper[5072]: I1124 11:25:14.562827 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-hdh5p" Nov 24 11:25:15 crc kubenswrapper[5072]: I1124 11:25:15.030599 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-w56kf"] Nov 24 11:25:15 crc kubenswrapper[5072]: E1124 11:25:15.030873 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0360518b-381a-4d27-b4fe-9a25bb76024f" containerName="ovn-config" Nov 24 11:25:15 crc kubenswrapper[5072]: I1124 11:25:15.030885 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="0360518b-381a-4d27-b4fe-9a25bb76024f" containerName="ovn-config" Nov 24 11:25:15 crc kubenswrapper[5072]: E1124 11:25:15.030915 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76bdb5be-3864-4599-9ac5-7475f63290a3" containerName="glance-db-sync" Nov 24 11:25:15 crc kubenswrapper[5072]: I1124 11:25:15.030922 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="76bdb5be-3864-4599-9ac5-7475f63290a3" containerName="glance-db-sync" Nov 24 11:25:15 crc kubenswrapper[5072]: I1124 11:25:15.031053 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="0360518b-381a-4d27-b4fe-9a25bb76024f" containerName="ovn-config" Nov 24 11:25:15 crc kubenswrapper[5072]: I1124 11:25:15.031066 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="76bdb5be-3864-4599-9ac5-7475f63290a3" containerName="glance-db-sync" Nov 24 11:25:15 crc kubenswrapper[5072]: I1124 11:25:15.031814 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" Nov 24 11:25:15 crc kubenswrapper[5072]: I1124 11:25:15.047990 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-w56kf"] Nov 24 11:25:15 crc kubenswrapper[5072]: I1124 11:25:15.185512 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65c4aeb0-5394-4ff2-b993-449041d6ba77-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-w56kf\" (UID: \"65c4aeb0-5394-4ff2-b993-449041d6ba77\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" Nov 24 11:25:15 crc kubenswrapper[5072]: I1124 11:25:15.185696 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnsv2\" (UniqueName: \"kubernetes.io/projected/65c4aeb0-5394-4ff2-b993-449041d6ba77-kube-api-access-wnsv2\") pod \"dnsmasq-dns-54f9b7b8d9-w56kf\" (UID: \"65c4aeb0-5394-4ff2-b993-449041d6ba77\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" Nov 24 11:25:15 crc kubenswrapper[5072]: I1124 11:25:15.185883 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65c4aeb0-5394-4ff2-b993-449041d6ba77-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-w56kf\" (UID: \"65c4aeb0-5394-4ff2-b993-449041d6ba77\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" Nov 24 11:25:15 crc kubenswrapper[5072]: I1124 11:25:15.186006 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65c4aeb0-5394-4ff2-b993-449041d6ba77-config\") pod \"dnsmasq-dns-54f9b7b8d9-w56kf\" (UID: \"65c4aeb0-5394-4ff2-b993-449041d6ba77\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" Nov 24 11:25:15 crc kubenswrapper[5072]: I1124 11:25:15.186042 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65c4aeb0-5394-4ff2-b993-449041d6ba77-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-w56kf\" (UID: \"65c4aeb0-5394-4ff2-b993-449041d6ba77\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" Nov 24 11:25:15 crc kubenswrapper[5072]: I1124 11:25:15.287922 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnsv2\" (UniqueName: \"kubernetes.io/projected/65c4aeb0-5394-4ff2-b993-449041d6ba77-kube-api-access-wnsv2\") pod \"dnsmasq-dns-54f9b7b8d9-w56kf\" (UID: \"65c4aeb0-5394-4ff2-b993-449041d6ba77\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" Nov 24 11:25:15 crc kubenswrapper[5072]: I1124 11:25:15.288076 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65c4aeb0-5394-4ff2-b993-449041d6ba77-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-w56kf\" (UID: \"65c4aeb0-5394-4ff2-b993-449041d6ba77\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" Nov 24 11:25:15 crc kubenswrapper[5072]: I1124 11:25:15.288164 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65c4aeb0-5394-4ff2-b993-449041d6ba77-config\") pod \"dnsmasq-dns-54f9b7b8d9-w56kf\" (UID: \"65c4aeb0-5394-4ff2-b993-449041d6ba77\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" Nov 24 11:25:15 crc kubenswrapper[5072]: I1124 11:25:15.288210 5072 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65c4aeb0-5394-4ff2-b993-449041d6ba77-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-w56kf\" (UID: \"65c4aeb0-5394-4ff2-b993-449041d6ba77\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" Nov 24 11:25:15 crc kubenswrapper[5072]: I1124 11:25:15.288260 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65c4aeb0-5394-4ff2-b993-449041d6ba77-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-w56kf\" (UID: \"65c4aeb0-5394-4ff2-b993-449041d6ba77\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" Nov 24 11:25:15 crc kubenswrapper[5072]: I1124 11:25:15.289281 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65c4aeb0-5394-4ff2-b993-449041d6ba77-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-w56kf\" (UID: \"65c4aeb0-5394-4ff2-b993-449041d6ba77\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" Nov 24 11:25:15 crc kubenswrapper[5072]: I1124 11:25:15.289752 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65c4aeb0-5394-4ff2-b993-449041d6ba77-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-w56kf\" (UID: \"65c4aeb0-5394-4ff2-b993-449041d6ba77\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" Nov 24 11:25:15 crc kubenswrapper[5072]: I1124 11:25:15.290215 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65c4aeb0-5394-4ff2-b993-449041d6ba77-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-w56kf\" (UID: \"65c4aeb0-5394-4ff2-b993-449041d6ba77\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" Nov 24 11:25:15 crc kubenswrapper[5072]: I1124 11:25:15.290975 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65c4aeb0-5394-4ff2-b993-449041d6ba77-config\") pod \"dnsmasq-dns-54f9b7b8d9-w56kf\" (UID: \"65c4aeb0-5394-4ff2-b993-449041d6ba77\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" Nov 24 11:25:15 crc kubenswrapper[5072]: I1124 11:25:15.307727 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnsv2\" (UniqueName: \"kubernetes.io/projected/65c4aeb0-5394-4ff2-b993-449041d6ba77-kube-api-access-wnsv2\") pod \"dnsmasq-dns-54f9b7b8d9-w56kf\" (UID: \"65c4aeb0-5394-4ff2-b993-449041d6ba77\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" Nov 24 11:25:15 crc kubenswrapper[5072]: I1124 11:25:15.348799 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" Nov 24 11:25:15 crc kubenswrapper[5072]: I1124 11:25:15.627830 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-w56kf"] Nov 24 11:25:15 crc kubenswrapper[5072]: W1124 11:25:15.631112 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65c4aeb0_5394_4ff2_b993_449041d6ba77.slice/crio-6ccdb1a5e1a5d38d9960866157aa4333c206525d7abde7bb8b8b2f86220a5a1d WatchSource:0}: Error finding container 6ccdb1a5e1a5d38d9960866157aa4333c206525d7abde7bb8b8b2f86220a5a1d: Status 404 returned error can't find the container with id 6ccdb1a5e1a5d38d9960866157aa4333c206525d7abde7bb8b8b2f86220a5a1d Nov 24 11:25:16 crc kubenswrapper[5072]: I1124 11:25:16.584042 5072 generic.go:334] "Generic (PLEG): container finished" podID="65c4aeb0-5394-4ff2-b993-449041d6ba77" containerID="8ec854b2cfbd331db577cc2df1c111b686beff0b181fb0a7c05bba54a207a5ee" exitCode=0 Nov 24 11:25:16 crc kubenswrapper[5072]: I1124 11:25:16.584343 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" event={"ID":"65c4aeb0-5394-4ff2-b993-449041d6ba77","Type":"ContainerDied","Data":"8ec854b2cfbd331db577cc2df1c111b686beff0b181fb0a7c05bba54a207a5ee"} Nov 24 11:25:16 crc kubenswrapper[5072]: I1124 11:25:16.584396 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" event={"ID":"65c4aeb0-5394-4ff2-b993-449041d6ba77","Type":"ContainerStarted","Data":"6ccdb1a5e1a5d38d9960866157aa4333c206525d7abde7bb8b8b2f86220a5a1d"} Nov 24 11:25:17 crc kubenswrapper[5072]: I1124 11:25:17.598597 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" event={"ID":"65c4aeb0-5394-4ff2-b993-449041d6ba77","Type":"ContainerStarted","Data":"a6afe5388d692db48c23ec636539320874fa9385f06e96c71c08f8277c15fdf3"} Nov 24 11:25:17 crc kubenswrapper[5072]: I1124 11:25:17.598865 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" Nov 24 11:25:17 crc kubenswrapper[5072]: I1124 11:25:17.618143 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" podStartSLOduration=2.618116085 podStartE2EDuration="2.618116085s" podCreationTimestamp="2025-11-24 11:25:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:25:17.615655202 +0000 UTC m=+969.327179718" watchObservedRunningTime="2025-11-24 11:25:17.618116085 +0000 UTC m=+969.329640591" Nov 24 11:25:25 crc kubenswrapper[5072]: I1124 11:25:25.350669 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" Nov 24 11:25:25 crc kubenswrapper[5072]: I1124 11:25:25.449025 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7wvdb"] Nov 24 11:25:25 crc kubenswrapper[5072]: I1124 11:25:25.449494 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-7wvdb" podUID="d0f0d5b2-2676-4305-8072-10fce8aeb222" containerName="dnsmasq-dns" containerID="cri-o://bcf821958a020716e02a3080425c5daa3cf9d92d26367cae002cd85d03166d35" gracePeriod=10 Nov 24 11:25:25 crc kubenswrapper[5072]: I1124 11:25:25.673227 5072 generic.go:334] "Generic (PLEG): container finished" 
podID="d0f0d5b2-2676-4305-8072-10fce8aeb222" containerID="bcf821958a020716e02a3080425c5daa3cf9d92d26367cae002cd85d03166d35" exitCode=0 Nov 24 11:25:25 crc kubenswrapper[5072]: I1124 11:25:25.673310 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7wvdb" event={"ID":"d0f0d5b2-2676-4305-8072-10fce8aeb222","Type":"ContainerDied","Data":"bcf821958a020716e02a3080425c5daa3cf9d92d26367cae002cd85d03166d35"} Nov 24 11:25:25 crc kubenswrapper[5072]: I1124 11:25:25.912935 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7wvdb" Nov 24 11:25:26 crc kubenswrapper[5072]: I1124 11:25:26.073595 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0f0d5b2-2676-4305-8072-10fce8aeb222-dns-svc\") pod \"d0f0d5b2-2676-4305-8072-10fce8aeb222\" (UID: \"d0f0d5b2-2676-4305-8072-10fce8aeb222\") " Nov 24 11:25:26 crc kubenswrapper[5072]: I1124 11:25:26.073682 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0f0d5b2-2676-4305-8072-10fce8aeb222-ovsdbserver-sb\") pod \"d0f0d5b2-2676-4305-8072-10fce8aeb222\" (UID: \"d0f0d5b2-2676-4305-8072-10fce8aeb222\") " Nov 24 11:25:26 crc kubenswrapper[5072]: I1124 11:25:26.073712 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn6sq\" (UniqueName: \"kubernetes.io/projected/d0f0d5b2-2676-4305-8072-10fce8aeb222-kube-api-access-rn6sq\") pod \"d0f0d5b2-2676-4305-8072-10fce8aeb222\" (UID: \"d0f0d5b2-2676-4305-8072-10fce8aeb222\") " Nov 24 11:25:26 crc kubenswrapper[5072]: I1124 11:25:26.073781 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0f0d5b2-2676-4305-8072-10fce8aeb222-config\") pod \"d0f0d5b2-2676-4305-8072-10fce8aeb222\" (UID: \"d0f0d5b2-2676-4305-8072-10fce8aeb222\") " Nov 24 11:25:26 crc kubenswrapper[5072]: I1124 11:25:26.073839 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0f0d5b2-2676-4305-8072-10fce8aeb222-ovsdbserver-nb\") pod \"d0f0d5b2-2676-4305-8072-10fce8aeb222\" (UID: \"d0f0d5b2-2676-4305-8072-10fce8aeb222\") " Nov 24 11:25:26 crc kubenswrapper[5072]: I1124 11:25:26.079337 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0f0d5b2-2676-4305-8072-10fce8aeb222-kube-api-access-rn6sq" (OuterVolumeSpecName: "kube-api-access-rn6sq") pod "d0f0d5b2-2676-4305-8072-10fce8aeb222" (UID: "d0f0d5b2-2676-4305-8072-10fce8aeb222"). InnerVolumeSpecName "kube-api-access-rn6sq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:25:26 crc kubenswrapper[5072]: I1124 11:25:26.115943 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0f0d5b2-2676-4305-8072-10fce8aeb222-config" (OuterVolumeSpecName: "config") pod "d0f0d5b2-2676-4305-8072-10fce8aeb222" (UID: "d0f0d5b2-2676-4305-8072-10fce8aeb222"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:25:26 crc kubenswrapper[5072]: I1124 11:25:26.121200 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0f0d5b2-2676-4305-8072-10fce8aeb222-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d0f0d5b2-2676-4305-8072-10fce8aeb222" (UID: "d0f0d5b2-2676-4305-8072-10fce8aeb222"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:25:26 crc kubenswrapper[5072]: I1124 11:25:26.123608 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0f0d5b2-2676-4305-8072-10fce8aeb222-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d0f0d5b2-2676-4305-8072-10fce8aeb222" (UID: "d0f0d5b2-2676-4305-8072-10fce8aeb222"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:25:26 crc kubenswrapper[5072]: I1124 11:25:26.123937 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0f0d5b2-2676-4305-8072-10fce8aeb222-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d0f0d5b2-2676-4305-8072-10fce8aeb222" (UID: "d0f0d5b2-2676-4305-8072-10fce8aeb222"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:25:26 crc kubenswrapper[5072]: I1124 11:25:26.175472 5072 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0f0d5b2-2676-4305-8072-10fce8aeb222-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:26 crc kubenswrapper[5072]: I1124 11:25:26.175503 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn6sq\" (UniqueName: \"kubernetes.io/projected/d0f0d5b2-2676-4305-8072-10fce8aeb222-kube-api-access-rn6sq\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:26 crc kubenswrapper[5072]: I1124 11:25:26.175516 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0f0d5b2-2676-4305-8072-10fce8aeb222-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:26 crc kubenswrapper[5072]: I1124 11:25:26.175524 5072 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0f0d5b2-2676-4305-8072-10fce8aeb222-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:26 crc kubenswrapper[5072]: I1124 11:25:26.175550 5072 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0f0d5b2-2676-4305-8072-10fce8aeb222-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:26 crc kubenswrapper[5072]: I1124 11:25:26.687153 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7wvdb" event={"ID":"d0f0d5b2-2676-4305-8072-10fce8aeb222","Type":"ContainerDied","Data":"36b8257379145b0813bc1becfb8bb5ddc9ec4bd3f06bba01edf4ac90f3c467c8"} Nov 24 11:25:26 crc kubenswrapper[5072]: I1124 11:25:26.687213 5072 scope.go:117] "RemoveContainer" containerID="bcf821958a020716e02a3080425c5daa3cf9d92d26367cae002cd85d03166d35" Nov 24 11:25:26 crc kubenswrapper[5072]: I1124 11:25:26.687245 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7wvdb" Nov 24 11:25:26 crc kubenswrapper[5072]: I1124 11:25:26.722517 5072 scope.go:117] "RemoveContainer" containerID="a70e1e2dd4d7bb256024f237e7927abfac9c32bc27e0ac8bda31ff2b80a34be9" Nov 24 11:25:26 crc kubenswrapper[5072]: I1124 11:25:26.724585 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7wvdb"] Nov 24 11:25:26 crc kubenswrapper[5072]: I1124 11:25:26.738412 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7wvdb"] Nov 24 11:25:27 crc kubenswrapper[5072]: I1124 11:25:27.035720 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0f0d5b2-2676-4305-8072-10fce8aeb222" path="/var/lib/kubelet/pods/d0f0d5b2-2676-4305-8072-10fce8aeb222/volumes" Nov 24 11:25:28 crc kubenswrapper[5072]: I1124 11:25:28.753763 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.242236 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-wp6ws"] Nov 24 11:25:29 crc kubenswrapper[5072]: E1124 11:25:29.242535 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0f0d5b2-2676-4305-8072-10fce8aeb222" containerName="dnsmasq-dns" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.242553 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0f0d5b2-2676-4305-8072-10fce8aeb222" containerName="dnsmasq-dns" Nov 24 11:25:29 crc kubenswrapper[5072]: E1124 11:25:29.242573 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0f0d5b2-2676-4305-8072-10fce8aeb222" containerName="init" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.242580 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0f0d5b2-2676-4305-8072-10fce8aeb222" containerName="init" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.242717 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0f0d5b2-2676-4305-8072-10fce8aeb222" containerName="dnsmasq-dns" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.243221 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wp6ws" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.254071 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-a3c3-account-create-24pwx"] Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.255091 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-a3c3-account-create-24pwx" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.256658 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.263321 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wp6ws"] Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.271027 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a3c3-account-create-24pwx"] Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.334940 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f64dc57b-2fb4-4ad8-99a9-f9756664b3c4-operator-scripts\") pod \"cinder-db-create-wp6ws\" (UID: \"f64dc57b-2fb4-4ad8-99a9-f9756664b3c4\") " pod="openstack/cinder-db-create-wp6ws" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.335024 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69xg4\" (UniqueName: \"kubernetes.io/projected/f64dc57b-2fb4-4ad8-99a9-f9756664b3c4-kube-api-access-69xg4\") pod \"cinder-db-create-wp6ws\" (UID: \"f64dc57b-2fb4-4ad8-99a9-f9756664b3c4\") " pod="openstack/cinder-db-create-wp6ws" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.335046 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79a97b6f-0aa6-4059-8495-23ceff788793-operator-scripts\") pod \"cinder-a3c3-account-create-24pwx\" (UID: \"79a97b6f-0aa6-4059-8495-23ceff788793\") " pod="openstack/cinder-a3c3-account-create-24pwx" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.335073 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqd4z\" (UniqueName: \"kubernetes.io/projected/79a97b6f-0aa6-4059-8495-23ceff788793-kube-api-access-pqd4z\") pod \"cinder-a3c3-account-create-24pwx\" (UID: \"79a97b6f-0aa6-4059-8495-23ceff788793\") " pod="openstack/cinder-a3c3-account-create-24pwx" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.344304 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-a502-account-create-z6jg6"] Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.345227 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a502-account-create-z6jg6" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.347469 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.363534 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a502-account-create-z6jg6"] Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.368606 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-h4ncm"] Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.369620 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-h4ncm" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.383355 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-h4ncm"] Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.436660 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69xg4\" (UniqueName: \"kubernetes.io/projected/f64dc57b-2fb4-4ad8-99a9-f9756664b3c4-kube-api-access-69xg4\") pod \"cinder-db-create-wp6ws\" (UID: \"f64dc57b-2fb4-4ad8-99a9-f9756664b3c4\") " pod="openstack/cinder-db-create-wp6ws" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.436696 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79a97b6f-0aa6-4059-8495-23ceff788793-operator-scripts\") pod \"cinder-a3c3-account-create-24pwx\" (UID: \"79a97b6f-0aa6-4059-8495-23ceff788793\") " pod="openstack/cinder-a3c3-account-create-24pwx" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.436723 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqd4z\" (UniqueName: \"kubernetes.io/projected/79a97b6f-0aa6-4059-8495-23ceff788793-kube-api-access-pqd4z\") pod \"cinder-a3c3-account-create-24pwx\" (UID: \"79a97b6f-0aa6-4059-8495-23ceff788793\") " pod="openstack/cinder-a3c3-account-create-24pwx" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.436747 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/647daeca-7489-478d-930c-3a780336be49-operator-scripts\") pod \"barbican-a502-account-create-z6jg6\" (UID: \"647daeca-7489-478d-930c-3a780336be49\") " pod="openstack/barbican-a502-account-create-z6jg6" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.436816 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzn5m\" (UniqueName: \"kubernetes.io/projected/bffbb2ab-3908-425a-ba38-80a69a37a16a-kube-api-access-gzn5m\") pod \"barbican-db-create-h4ncm\" (UID: \"bffbb2ab-3908-425a-ba38-80a69a37a16a\") " pod="openstack/barbican-db-create-h4ncm" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.436840 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzt4m\" (UniqueName: \"kubernetes.io/projected/647daeca-7489-478d-930c-3a780336be49-kube-api-access-vzt4m\") pod \"barbican-a502-account-create-z6jg6\" (UID: \"647daeca-7489-478d-930c-3a780336be49\") " pod="openstack/barbican-a502-account-create-z6jg6" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.436864 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f64dc57b-2fb4-4ad8-99a9-f9756664b3c4-operator-scripts\") pod \"cinder-db-create-wp6ws\" (UID: \"f64dc57b-2fb4-4ad8-99a9-f9756664b3c4\") " pod="openstack/cinder-db-create-wp6ws" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.436883 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bffbb2ab-3908-425a-ba38-80a69a37a16a-operator-scripts\") pod \"barbican-db-create-h4ncm\" (UID: \"bffbb2ab-3908-425a-ba38-80a69a37a16a\") " pod="openstack/barbican-db-create-h4ncm" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.437764 5072 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79a97b6f-0aa6-4059-8495-23ceff788793-operator-scripts\") pod \"cinder-a3c3-account-create-24pwx\" (UID: \"79a97b6f-0aa6-4059-8495-23ceff788793\") " pod="openstack/cinder-a3c3-account-create-24pwx" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.438419 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f64dc57b-2fb4-4ad8-99a9-f9756664b3c4-operator-scripts\") pod \"cinder-db-create-wp6ws\" (UID: \"f64dc57b-2fb4-4ad8-99a9-f9756664b3c4\") " pod="openstack/cinder-db-create-wp6ws" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.438591 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-mj6kc"] Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.439523 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mj6kc" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.448724 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mj6kc"] Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.456925 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqd4z\" (UniqueName: \"kubernetes.io/projected/79a97b6f-0aa6-4059-8495-23ceff788793-kube-api-access-pqd4z\") pod \"cinder-a3c3-account-create-24pwx\" (UID: \"79a97b6f-0aa6-4059-8495-23ceff788793\") " pod="openstack/cinder-a3c3-account-create-24pwx" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.461342 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69xg4\" (UniqueName: \"kubernetes.io/projected/f64dc57b-2fb4-4ad8-99a9-f9756664b3c4-kube-api-access-69xg4\") pod \"cinder-db-create-wp6ws\" (UID: \"f64dc57b-2fb4-4ad8-99a9-f9756664b3c4\") " pod="openstack/cinder-db-create-wp6ws" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.508220 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-sh9kr"] Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.509108 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-sh9kr" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.512278 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.512475 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.512680 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.512805 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-lc8qn" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.526405 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-sh9kr"] Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.538489 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72v2d\" (UniqueName: \"kubernetes.io/projected/bc652e4a-54d1-43f7-b547-d86b30ae0797-kube-api-access-72v2d\") pod \"neutron-db-create-mj6kc\" (UID: \"bc652e4a-54d1-43f7-b547-d86b30ae0797\") " pod="openstack/neutron-db-create-mj6kc" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.538562 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/647daeca-7489-478d-930c-3a780336be49-operator-scripts\") pod \"barbican-a502-account-create-z6jg6\" (UID: \"647daeca-7489-478d-930c-3a780336be49\") " pod="openstack/barbican-a502-account-create-z6jg6" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.538646 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc652e4a-54d1-43f7-b547-d86b30ae0797-operator-scripts\") pod \"neutron-db-create-mj6kc\" (UID: \"bc652e4a-54d1-43f7-b547-d86b30ae0797\") " pod="openstack/neutron-db-create-mj6kc" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.538711 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzn5m\" (UniqueName: \"kubernetes.io/projected/bffbb2ab-3908-425a-ba38-80a69a37a16a-kube-api-access-gzn5m\") pod \"barbican-db-create-h4ncm\" (UID: \"bffbb2ab-3908-425a-ba38-80a69a37a16a\") " pod="openstack/barbican-db-create-h4ncm" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.538752 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzt4m\" (UniqueName: \"kubernetes.io/projected/647daeca-7489-478d-930c-3a780336be49-kube-api-access-vzt4m\") pod \"barbican-a502-account-create-z6jg6\" (UID: \"647daeca-7489-478d-930c-3a780336be49\") " pod="openstack/barbican-a502-account-create-z6jg6" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.538781 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bffbb2ab-3908-425a-ba38-80a69a37a16a-operator-scripts\") pod \"barbican-db-create-h4ncm\" (UID: \"bffbb2ab-3908-425a-ba38-80a69a37a16a\") " pod="openstack/barbican-db-create-h4ncm" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.539181 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/647daeca-7489-478d-930c-3a780336be49-operator-scripts\") pod 
\"barbican-a502-account-create-z6jg6\" (UID: \"647daeca-7489-478d-930c-3a780336be49\") " pod="openstack/barbican-a502-account-create-z6jg6" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.539475 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bffbb2ab-3908-425a-ba38-80a69a37a16a-operator-scripts\") pod \"barbican-db-create-h4ncm\" (UID: \"bffbb2ab-3908-425a-ba38-80a69a37a16a\") " pod="openstack/barbican-db-create-h4ncm" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.558299 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wp6ws" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.563536 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-95b4-account-create-x4sc7"] Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.565022 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-95b4-account-create-x4sc7" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.568615 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a3c3-account-create-24pwx" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.573247 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzn5m\" (UniqueName: \"kubernetes.io/projected/bffbb2ab-3908-425a-ba38-80a69a37a16a-kube-api-access-gzn5m\") pod \"barbican-db-create-h4ncm\" (UID: \"bffbb2ab-3908-425a-ba38-80a69a37a16a\") " pod="openstack/barbican-db-create-h4ncm" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.576593 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.581599 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzt4m\" (UniqueName: \"kubernetes.io/projected/647daeca-7489-478d-930c-3a780336be49-kube-api-access-vzt4m\") pod \"barbican-a502-account-create-z6jg6\" (UID: \"647daeca-7489-478d-930c-3a780336be49\") " pod="openstack/barbican-a502-account-create-z6jg6" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.602066 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-95b4-account-create-x4sc7"] Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.640589 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72v2d\" (UniqueName: \"kubernetes.io/projected/bc652e4a-54d1-43f7-b547-d86b30ae0797-kube-api-access-72v2d\") pod \"neutron-db-create-mj6kc\" (UID: \"bc652e4a-54d1-43f7-b547-d86b30ae0797\") " pod="openstack/neutron-db-create-mj6kc" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.640634 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0b8deb9-6451-4091-bc77-884a3581af75-operator-scripts\") pod \"neutron-95b4-account-create-x4sc7\" (UID: \"d0b8deb9-6451-4091-bc77-884a3581af75\") " pod="openstack/neutron-95b4-account-create-x4sc7" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.640683 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f41a09-fa7a-4077-8502-58295771132e-combined-ca-bundle\") pod \"keystone-db-sync-sh9kr\" (UID: \"d4f41a09-fa7a-4077-8502-58295771132e\") " 
pod="openstack/keystone-db-sync-sh9kr" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.640708 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc652e4a-54d1-43f7-b547-d86b30ae0797-operator-scripts\") pod \"neutron-db-create-mj6kc\" (UID: \"bc652e4a-54d1-43f7-b547-d86b30ae0797\") " pod="openstack/neutron-db-create-mj6kc" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.640735 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjkms\" (UniqueName: \"kubernetes.io/projected/d4f41a09-fa7a-4077-8502-58295771132e-kube-api-access-wjkms\") pod \"keystone-db-sync-sh9kr\" (UID: \"d4f41a09-fa7a-4077-8502-58295771132e\") " pod="openstack/keystone-db-sync-sh9kr" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.640771 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29qhr\" (UniqueName: \"kubernetes.io/projected/d0b8deb9-6451-4091-bc77-884a3581af75-kube-api-access-29qhr\") pod \"neutron-95b4-account-create-x4sc7\" (UID: \"d0b8deb9-6451-4091-bc77-884a3581af75\") " pod="openstack/neutron-95b4-account-create-x4sc7" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.640793 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4f41a09-fa7a-4077-8502-58295771132e-config-data\") pod \"keystone-db-sync-sh9kr\" (UID: \"d4f41a09-fa7a-4077-8502-58295771132e\") " pod="openstack/keystone-db-sync-sh9kr" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.641752 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc652e4a-54d1-43f7-b547-d86b30ae0797-operator-scripts\") pod \"neutron-db-create-mj6kc\" (UID: \"bc652e4a-54d1-43f7-b547-d86b30ae0797\") " pod="openstack/neutron-db-create-mj6kc" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.655889 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72v2d\" (UniqueName: \"kubernetes.io/projected/bc652e4a-54d1-43f7-b547-d86b30ae0797-kube-api-access-72v2d\") pod \"neutron-db-create-mj6kc\" (UID: \"bc652e4a-54d1-43f7-b547-d86b30ae0797\") " pod="openstack/neutron-db-create-mj6kc" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.662651 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a502-account-create-z6jg6" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.699127 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-h4ncm" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.741907 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29qhr\" (UniqueName: \"kubernetes.io/projected/d0b8deb9-6451-4091-bc77-884a3581af75-kube-api-access-29qhr\") pod \"neutron-95b4-account-create-x4sc7\" (UID: \"d0b8deb9-6451-4091-bc77-884a3581af75\") " pod="openstack/neutron-95b4-account-create-x4sc7" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.741950 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4f41a09-fa7a-4077-8502-58295771132e-config-data\") pod \"keystone-db-sync-sh9kr\" (UID: \"d4f41a09-fa7a-4077-8502-58295771132e\") " pod="openstack/keystone-db-sync-sh9kr" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.742012 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0b8deb9-6451-4091-bc77-884a3581af75-operator-scripts\") pod \"neutron-95b4-account-create-x4sc7\" (UID: \"d0b8deb9-6451-4091-bc77-884a3581af75\") " pod="openstack/neutron-95b4-account-create-x4sc7" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.742057 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f41a09-fa7a-4077-8502-58295771132e-combined-ca-bundle\") pod \"keystone-db-sync-sh9kr\" (UID: \"d4f41a09-fa7a-4077-8502-58295771132e\") " pod="openstack/keystone-db-sync-sh9kr" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.742089 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjkms\" (UniqueName: \"kubernetes.io/projected/d4f41a09-fa7a-4077-8502-58295771132e-kube-api-access-wjkms\") pod \"keystone-db-sync-sh9kr\" (UID: \"d4f41a09-fa7a-4077-8502-58295771132e\") " pod="openstack/keystone-db-sync-sh9kr" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.742956 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0b8deb9-6451-4091-bc77-884a3581af75-operator-scripts\") pod \"neutron-95b4-account-create-x4sc7\" (UID: \"d0b8deb9-6451-4091-bc77-884a3581af75\") " pod="openstack/neutron-95b4-account-create-x4sc7" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.746784 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f41a09-fa7a-4077-8502-58295771132e-combined-ca-bundle\") pod \"keystone-db-sync-sh9kr\" (UID: \"d4f41a09-fa7a-4077-8502-58295771132e\") " pod="openstack/keystone-db-sync-sh9kr" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.749533 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4f41a09-fa7a-4077-8502-58295771132e-config-data\") pod \"keystone-db-sync-sh9kr\" (UID: \"d4f41a09-fa7a-4077-8502-58295771132e\") " pod="openstack/keystone-db-sync-sh9kr" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.755219 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-mj6kc" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.759909 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29qhr\" (UniqueName: \"kubernetes.io/projected/d0b8deb9-6451-4091-bc77-884a3581af75-kube-api-access-29qhr\") pod \"neutron-95b4-account-create-x4sc7\" (UID: \"d0b8deb9-6451-4091-bc77-884a3581af75\") " pod="openstack/neutron-95b4-account-create-x4sc7" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.760326 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjkms\" (UniqueName: \"kubernetes.io/projected/d4f41a09-fa7a-4077-8502-58295771132e-kube-api-access-wjkms\") pod \"keystone-db-sync-sh9kr\" (UID: \"d4f41a09-fa7a-4077-8502-58295771132e\") " pod="openstack/keystone-db-sync-sh9kr" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.831137 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-sh9kr" Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.924669 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a3c3-account-create-24pwx"] Nov 24 11:25:29 crc kubenswrapper[5072]: I1124 11:25:29.971980 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-95b4-account-create-x4sc7" Nov 24 11:25:30 crc kubenswrapper[5072]: I1124 11:25:30.053809 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wp6ws"] Nov 24 11:25:30 crc kubenswrapper[5072]: W1124 11:25:30.066838 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf64dc57b_2fb4_4ad8_99a9_f9756664b3c4.slice/crio-5147af817f9b6b9dcd63daa78c282c7ddc2d367fb03bcdd89898bfc6d486ff39 WatchSource:0}: Error finding container 5147af817f9b6b9dcd63daa78c282c7ddc2d367fb03bcdd89898bfc6d486ff39: Status 404 returned error can't find the container with id 5147af817f9b6b9dcd63daa78c282c7ddc2d367fb03bcdd89898bfc6d486ff39 Nov 24 11:25:30 crc kubenswrapper[5072]: I1124 11:25:30.240807 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-h4ncm"] Nov 24 11:25:30 crc kubenswrapper[5072]: I1124 11:25:30.255457 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a502-account-create-z6jg6"] Nov 24 11:25:30 crc kubenswrapper[5072]: I1124 11:25:30.348058 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mj6kc"] Nov 24 11:25:30 crc kubenswrapper[5072]: I1124 11:25:30.416847 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-sh9kr"] Nov 24 11:25:30 crc kubenswrapper[5072]: W1124 11:25:30.425447 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4f41a09_fa7a_4077_8502_58295771132e.slice/crio-9a0f049a88b10b9bcfb8d37d017de38982b2ac7a12f7efd91579de4851c134f9 WatchSource:0}: Error finding container 9a0f049a88b10b9bcfb8d37d017de38982b2ac7a12f7efd91579de4851c134f9: Status 404 returned error can't find the container with id 9a0f049a88b10b9bcfb8d37d017de38982b2ac7a12f7efd91579de4851c134f9 Nov 24 11:25:30 crc kubenswrapper[5072]: I1124 11:25:30.522429 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-95b4-account-create-x4sc7"] Nov 24 11:25:30 crc kubenswrapper[5072]: W1124 11:25:30.540446 5072 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0b8deb9_6451_4091_bc77_884a3581af75.slice/crio-b49af41df1b22b96a07d25f59265a37ce79747c8fdffd1b590faa84c46a37080 WatchSource:0}: Error finding container b49af41df1b22b96a07d25f59265a37ce79747c8fdffd1b590faa84c46a37080: Status 404 returned error can't find the container with id b49af41df1b22b96a07d25f59265a37ce79747c8fdffd1b590faa84c46a37080 Nov 24 11:25:30 crc kubenswrapper[5072]: I1124 11:25:30.738225 5072 generic.go:334] "Generic (PLEG): container finished" podID="f64dc57b-2fb4-4ad8-99a9-f9756664b3c4" containerID="4936f31cc6e34607b415a33f58a9dd3596dd27fc84aacd1c3707abf92fcca017" exitCode=0 Nov 24 11:25:30 crc kubenswrapper[5072]: I1124 11:25:30.738305 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wp6ws" event={"ID":"f64dc57b-2fb4-4ad8-99a9-f9756664b3c4","Type":"ContainerDied","Data":"4936f31cc6e34607b415a33f58a9dd3596dd27fc84aacd1c3707abf92fcca017"} Nov 24 11:25:30 crc kubenswrapper[5072]: I1124 11:25:30.738340 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wp6ws" event={"ID":"f64dc57b-2fb4-4ad8-99a9-f9756664b3c4","Type":"ContainerStarted","Data":"5147af817f9b6b9dcd63daa78c282c7ddc2d367fb03bcdd89898bfc6d486ff39"} Nov 24 11:25:30 crc kubenswrapper[5072]: I1124 11:25:30.741903 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sh9kr" event={"ID":"d4f41a09-fa7a-4077-8502-58295771132e","Type":"ContainerStarted","Data":"9a0f049a88b10b9bcfb8d37d017de38982b2ac7a12f7efd91579de4851c134f9"} Nov 24 11:25:30 crc kubenswrapper[5072]: I1124 11:25:30.744333 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mj6kc" event={"ID":"bc652e4a-54d1-43f7-b547-d86b30ae0797","Type":"ContainerStarted","Data":"79bcd35dd6d76a99b90dfd2d188142a7036a6a9bf0d2ee9b43a613e8080e0c46"} Nov 24 11:25:30 crc kubenswrapper[5072]: I1124 11:25:30.744554 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mj6kc" event={"ID":"bc652e4a-54d1-43f7-b547-d86b30ae0797","Type":"ContainerStarted","Data":"15a61575cc5aa5a1dc79aeba1e988adf97456180e007f4bc628270f1a3a081c6"} Nov 24 11:25:30 crc kubenswrapper[5072]: I1124 11:25:30.745825 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-95b4-account-create-x4sc7" event={"ID":"d0b8deb9-6451-4091-bc77-884a3581af75","Type":"ContainerStarted","Data":"d2c1dbb6da557058d66a82d8c7443c22025921dd8c1281cc02d33575ed58d7a9"} Nov 24 11:25:30 crc kubenswrapper[5072]: I1124 11:25:30.745962 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-95b4-account-create-x4sc7" event={"ID":"d0b8deb9-6451-4091-bc77-884a3581af75","Type":"ContainerStarted","Data":"b49af41df1b22b96a07d25f59265a37ce79747c8fdffd1b590faa84c46a37080"} Nov 24 11:25:30 crc kubenswrapper[5072]: I1124 11:25:30.747123 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-h4ncm" event={"ID":"bffbb2ab-3908-425a-ba38-80a69a37a16a","Type":"ContainerStarted","Data":"f0564c23ecc9f7d6844b1de314693700c94f1744400d7a1f1d3ca65508eadd4c"} Nov 24 11:25:30 crc kubenswrapper[5072]: I1124 11:25:30.747188 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-h4ncm" event={"ID":"bffbb2ab-3908-425a-ba38-80a69a37a16a","Type":"ContainerStarted","Data":"2d1f3cac88d0f24b6f632434a35b2af693fb62d29433825ada6a79dc4be618ec"} Nov 24 11:25:30 crc kubenswrapper[5072]: I1124 
11:25:30.749732 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a502-account-create-z6jg6" event={"ID":"647daeca-7489-478d-930c-3a780336be49","Type":"ContainerStarted","Data":"515c2d277fdb1783a233f9ecda35204f257df0e932af496a6631c73337ca0924"} Nov 24 11:25:30 crc kubenswrapper[5072]: I1124 11:25:30.749764 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a502-account-create-z6jg6" event={"ID":"647daeca-7489-478d-930c-3a780336be49","Type":"ContainerStarted","Data":"780020b6f65a1cbc03044aa6b6b1dd4b8a4a197ab627597c9c62366b0a6bfb7f"} Nov 24 11:25:30 crc kubenswrapper[5072]: I1124 11:25:30.755025 5072 generic.go:334] "Generic (PLEG): container finished" podID="79a97b6f-0aa6-4059-8495-23ceff788793" containerID="7661cbea52672967aab7f54dd6d29e802a68ce4065f8db181b7e3e2de73f8240" exitCode=0 Nov 24 11:25:30 crc kubenswrapper[5072]: I1124 11:25:30.755072 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a3c3-account-create-24pwx" event={"ID":"79a97b6f-0aa6-4059-8495-23ceff788793","Type":"ContainerDied","Data":"7661cbea52672967aab7f54dd6d29e802a68ce4065f8db181b7e3e2de73f8240"} Nov 24 11:25:30 crc kubenswrapper[5072]: I1124 11:25:30.755096 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a3c3-account-create-24pwx" event={"ID":"79a97b6f-0aa6-4059-8495-23ceff788793","Type":"ContainerStarted","Data":"9cde0ea0cffc471a3b8cddb9bbe92221d04f767cc0624c89a0423c4266cc7e84"} Nov 24 11:25:30 crc kubenswrapper[5072]: I1124 11:25:30.809472 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-95b4-account-create-x4sc7" podStartSLOduration=1.809454941 podStartE2EDuration="1.809454941s" podCreationTimestamp="2025-11-24 11:25:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:25:30.801462388 +0000 UTC m=+982.512986864" watchObservedRunningTime="2025-11-24 11:25:30.809454941 +0000 UTC m=+982.520979417" Nov 24 11:25:30 crc kubenswrapper[5072]: I1124 11:25:30.824801 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-a502-account-create-z6jg6" podStartSLOduration=1.824780209 podStartE2EDuration="1.824780209s" podCreationTimestamp="2025-11-24 11:25:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:25:30.815305179 +0000 UTC m=+982.526829645" watchObservedRunningTime="2025-11-24 11:25:30.824780209 +0000 UTC m=+982.536304675" Nov 24 11:25:31 crc kubenswrapper[5072]: I1124 11:25:31.767928 5072 generic.go:334] "Generic (PLEG): container finished" podID="d0b8deb9-6451-4091-bc77-884a3581af75" containerID="d2c1dbb6da557058d66a82d8c7443c22025921dd8c1281cc02d33575ed58d7a9" exitCode=0 Nov 24 11:25:31 crc kubenswrapper[5072]: I1124 11:25:31.768147 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-95b4-account-create-x4sc7" event={"ID":"d0b8deb9-6451-4091-bc77-884a3581af75","Type":"ContainerDied","Data":"d2c1dbb6da557058d66a82d8c7443c22025921dd8c1281cc02d33575ed58d7a9"} Nov 24 11:25:31 crc kubenswrapper[5072]: I1124 11:25:31.774749 5072 generic.go:334] "Generic (PLEG): container finished" podID="bffbb2ab-3908-425a-ba38-80a69a37a16a" containerID="f0564c23ecc9f7d6844b1de314693700c94f1744400d7a1f1d3ca65508eadd4c" exitCode=0 Nov 24 11:25:31 crc kubenswrapper[5072]: I1124 11:25:31.774840 5072 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-h4ncm" event={"ID":"bffbb2ab-3908-425a-ba38-80a69a37a16a","Type":"ContainerDied","Data":"f0564c23ecc9f7d6844b1de314693700c94f1744400d7a1f1d3ca65508eadd4c"} Nov 24 11:25:31 crc kubenswrapper[5072]: I1124 11:25:31.778031 5072 generic.go:334] "Generic (PLEG): container finished" podID="647daeca-7489-478d-930c-3a780336be49" containerID="515c2d277fdb1783a233f9ecda35204f257df0e932af496a6631c73337ca0924" exitCode=0 Nov 24 11:25:31 crc kubenswrapper[5072]: I1124 11:25:31.778086 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a502-account-create-z6jg6" event={"ID":"647daeca-7489-478d-930c-3a780336be49","Type":"ContainerDied","Data":"515c2d277fdb1783a233f9ecda35204f257df0e932af496a6631c73337ca0924"} Nov 24 11:25:31 crc kubenswrapper[5072]: I1124 11:25:31.784893 5072 generic.go:334] "Generic (PLEG): container finished" podID="bc652e4a-54d1-43f7-b547-d86b30ae0797" containerID="79bcd35dd6d76a99b90dfd2d188142a7036a6a9bf0d2ee9b43a613e8080e0c46" exitCode=0 Nov 24 11:25:31 crc kubenswrapper[5072]: I1124 11:25:31.784989 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mj6kc" event={"ID":"bc652e4a-54d1-43f7-b547-d86b30ae0797","Type":"ContainerDied","Data":"79bcd35dd6d76a99b90dfd2d188142a7036a6a9bf0d2ee9b43a613e8080e0c46"} Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.243448 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-h4ncm" Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.339256 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wp6ws" Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.344911 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mj6kc" Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.357331 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-a3c3-account-create-24pwx" Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.382935 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bffbb2ab-3908-425a-ba38-80a69a37a16a-operator-scripts\") pod \"bffbb2ab-3908-425a-ba38-80a69a37a16a\" (UID: \"bffbb2ab-3908-425a-ba38-80a69a37a16a\") " Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.383000 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f64dc57b-2fb4-4ad8-99a9-f9756664b3c4-operator-scripts\") pod \"f64dc57b-2fb4-4ad8-99a9-f9756664b3c4\" (UID: \"f64dc57b-2fb4-4ad8-99a9-f9756664b3c4\") " Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.383056 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzn5m\" (UniqueName: \"kubernetes.io/projected/bffbb2ab-3908-425a-ba38-80a69a37a16a-kube-api-access-gzn5m\") pod \"bffbb2ab-3908-425a-ba38-80a69a37a16a\" (UID: \"bffbb2ab-3908-425a-ba38-80a69a37a16a\") " Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.383159 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69xg4\" (UniqueName: \"kubernetes.io/projected/f64dc57b-2fb4-4ad8-99a9-f9756664b3c4-kube-api-access-69xg4\") pod \"f64dc57b-2fb4-4ad8-99a9-f9756664b3c4\" (UID: \"f64dc57b-2fb4-4ad8-99a9-f9756664b3c4\") " Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.383807 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bffbb2ab-3908-425a-ba38-80a69a37a16a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bffbb2ab-3908-425a-ba38-80a69a37a16a" (UID: "bffbb2ab-3908-425a-ba38-80a69a37a16a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.383810 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f64dc57b-2fb4-4ad8-99a9-f9756664b3c4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f64dc57b-2fb4-4ad8-99a9-f9756664b3c4" (UID: "f64dc57b-2fb4-4ad8-99a9-f9756664b3c4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.388619 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bffbb2ab-3908-425a-ba38-80a69a37a16a-kube-api-access-gzn5m" (OuterVolumeSpecName: "kube-api-access-gzn5m") pod "bffbb2ab-3908-425a-ba38-80a69a37a16a" (UID: "bffbb2ab-3908-425a-ba38-80a69a37a16a"). InnerVolumeSpecName "kube-api-access-gzn5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.388840 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f64dc57b-2fb4-4ad8-99a9-f9756664b3c4-kube-api-access-69xg4" (OuterVolumeSpecName: "kube-api-access-69xg4") pod "f64dc57b-2fb4-4ad8-99a9-f9756664b3c4" (UID: "f64dc57b-2fb4-4ad8-99a9-f9756664b3c4"). InnerVolumeSpecName "kube-api-access-69xg4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.484246 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqd4z\" (UniqueName: \"kubernetes.io/projected/79a97b6f-0aa6-4059-8495-23ceff788793-kube-api-access-pqd4z\") pod \"79a97b6f-0aa6-4059-8495-23ceff788793\" (UID: \"79a97b6f-0aa6-4059-8495-23ceff788793\") " Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.484319 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc652e4a-54d1-43f7-b547-d86b30ae0797-operator-scripts\") pod \"bc652e4a-54d1-43f7-b547-d86b30ae0797\" (UID: \"bc652e4a-54d1-43f7-b547-d86b30ae0797\") " Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.484412 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72v2d\" (UniqueName: \"kubernetes.io/projected/bc652e4a-54d1-43f7-b547-d86b30ae0797-kube-api-access-72v2d\") pod \"bc652e4a-54d1-43f7-b547-d86b30ae0797\" (UID: \"bc652e4a-54d1-43f7-b547-d86b30ae0797\") " Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.484433 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79a97b6f-0aa6-4059-8495-23ceff788793-operator-scripts\") pod \"79a97b6f-0aa6-4059-8495-23ceff788793\" (UID: \"79a97b6f-0aa6-4059-8495-23ceff788793\") " Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.484806 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc652e4a-54d1-43f7-b547-d86b30ae0797-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc652e4a-54d1-43f7-b547-d86b30ae0797" (UID: "bc652e4a-54d1-43f7-b547-d86b30ae0797"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.484824 5072 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bffbb2ab-3908-425a-ba38-80a69a37a16a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.484889 5072 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f64dc57b-2fb4-4ad8-99a9-f9756664b3c4-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.484904 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzn5m\" (UniqueName: \"kubernetes.io/projected/bffbb2ab-3908-425a-ba38-80a69a37a16a-kube-api-access-gzn5m\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.484917 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69xg4\" (UniqueName: \"kubernetes.io/projected/f64dc57b-2fb4-4ad8-99a9-f9756664b3c4-kube-api-access-69xg4\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.484980 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79a97b6f-0aa6-4059-8495-23ceff788793-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "79a97b6f-0aa6-4059-8495-23ceff788793" (UID: "79a97b6f-0aa6-4059-8495-23ceff788793"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.487366 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc652e4a-54d1-43f7-b547-d86b30ae0797-kube-api-access-72v2d" (OuterVolumeSpecName: "kube-api-access-72v2d") pod "bc652e4a-54d1-43f7-b547-d86b30ae0797" (UID: "bc652e4a-54d1-43f7-b547-d86b30ae0797"). InnerVolumeSpecName "kube-api-access-72v2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.488286 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79a97b6f-0aa6-4059-8495-23ceff788793-kube-api-access-pqd4z" (OuterVolumeSpecName: "kube-api-access-pqd4z") pod "79a97b6f-0aa6-4059-8495-23ceff788793" (UID: "79a97b6f-0aa6-4059-8495-23ceff788793"). InnerVolumeSpecName "kube-api-access-pqd4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.586096 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqd4z\" (UniqueName: \"kubernetes.io/projected/79a97b6f-0aa6-4059-8495-23ceff788793-kube-api-access-pqd4z\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.586121 5072 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc652e4a-54d1-43f7-b547-d86b30ae0797-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.586132 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72v2d\" (UniqueName: \"kubernetes.io/projected/bc652e4a-54d1-43f7-b547-d86b30ae0797-kube-api-access-72v2d\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.586141 5072 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79a97b6f-0aa6-4059-8495-23ceff788793-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.791674 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a3c3-account-create-24pwx" event={"ID":"79a97b6f-0aa6-4059-8495-23ceff788793","Type":"ContainerDied","Data":"9cde0ea0cffc471a3b8cddb9bbe92221d04f767cc0624c89a0423c4266cc7e84"} Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.791724 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cde0ea0cffc471a3b8cddb9bbe92221d04f767cc0624c89a0423c4266cc7e84" Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.791689 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a3c3-account-create-24pwx" Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.792875 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wp6ws" event={"ID":"f64dc57b-2fb4-4ad8-99a9-f9756664b3c4","Type":"ContainerDied","Data":"5147af817f9b6b9dcd63daa78c282c7ddc2d367fb03bcdd89898bfc6d486ff39"} Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.792913 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5147af817f9b6b9dcd63daa78c282c7ddc2d367fb03bcdd89898bfc6d486ff39" Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.792968 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-wp6ws" Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.804615 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mj6kc" Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.804609 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mj6kc" event={"ID":"bc652e4a-54d1-43f7-b547-d86b30ae0797","Type":"ContainerDied","Data":"15a61575cc5aa5a1dc79aeba1e988adf97456180e007f4bc628270f1a3a081c6"} Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.804822 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15a61575cc5aa5a1dc79aeba1e988adf97456180e007f4bc628270f1a3a081c6" Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.811500 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-h4ncm" event={"ID":"bffbb2ab-3908-425a-ba38-80a69a37a16a","Type":"ContainerDied","Data":"2d1f3cac88d0f24b6f632434a35b2af693fb62d29433825ada6a79dc4be618ec"} Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.811532 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d1f3cac88d0f24b6f632434a35b2af693fb62d29433825ada6a79dc4be618ec" Nov 24 11:25:32 crc kubenswrapper[5072]: I1124 11:25:32.811590 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-h4ncm" Nov 24 11:25:35 crc kubenswrapper[5072]: I1124 11:25:35.065863 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-95b4-account-create-x4sc7" Nov 24 11:25:35 crc kubenswrapper[5072]: I1124 11:25:35.070775 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-a502-account-create-z6jg6" Nov 24 11:25:35 crc kubenswrapper[5072]: I1124 11:25:35.126513 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzt4m\" (UniqueName: \"kubernetes.io/projected/647daeca-7489-478d-930c-3a780336be49-kube-api-access-vzt4m\") pod \"647daeca-7489-478d-930c-3a780336be49\" (UID: \"647daeca-7489-478d-930c-3a780336be49\") " Nov 24 11:25:35 crc kubenswrapper[5072]: I1124 11:25:35.126555 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/647daeca-7489-478d-930c-3a780336be49-operator-scripts\") pod \"647daeca-7489-478d-930c-3a780336be49\" (UID: \"647daeca-7489-478d-930c-3a780336be49\") " Nov 24 11:25:35 crc kubenswrapper[5072]: I1124 11:25:35.126612 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29qhr\" (UniqueName: \"kubernetes.io/projected/d0b8deb9-6451-4091-bc77-884a3581af75-kube-api-access-29qhr\") pod \"d0b8deb9-6451-4091-bc77-884a3581af75\" (UID: \"d0b8deb9-6451-4091-bc77-884a3581af75\") " Nov 24 11:25:35 crc kubenswrapper[5072]: I1124 11:25:35.126682 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0b8deb9-6451-4091-bc77-884a3581af75-operator-scripts\") pod \"d0b8deb9-6451-4091-bc77-884a3581af75\" (UID: \"d0b8deb9-6451-4091-bc77-884a3581af75\") " Nov 24 11:25:35 crc kubenswrapper[5072]: I1124 11:25:35.127420 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0b8deb9-6451-4091-bc77-884a3581af75-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d0b8deb9-6451-4091-bc77-884a3581af75" (UID: "d0b8deb9-6451-4091-bc77-884a3581af75"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:25:35 crc kubenswrapper[5072]: I1124 11:25:35.127584 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/647daeca-7489-478d-930c-3a780336be49-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "647daeca-7489-478d-930c-3a780336be49" (UID: "647daeca-7489-478d-930c-3a780336be49"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:25:35 crc kubenswrapper[5072]: I1124 11:25:35.130393 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0b8deb9-6451-4091-bc77-884a3581af75-kube-api-access-29qhr" (OuterVolumeSpecName: "kube-api-access-29qhr") pod "d0b8deb9-6451-4091-bc77-884a3581af75" (UID: "d0b8deb9-6451-4091-bc77-884a3581af75"). InnerVolumeSpecName "kube-api-access-29qhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:25:35 crc kubenswrapper[5072]: I1124 11:25:35.132332 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/647daeca-7489-478d-930c-3a780336be49-kube-api-access-vzt4m" (OuterVolumeSpecName: "kube-api-access-vzt4m") pod "647daeca-7489-478d-930c-3a780336be49" (UID: "647daeca-7489-478d-930c-3a780336be49"). InnerVolumeSpecName "kube-api-access-vzt4m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:25:35 crc kubenswrapper[5072]: I1124 11:25:35.227980 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzt4m\" (UniqueName: \"kubernetes.io/projected/647daeca-7489-478d-930c-3a780336be49-kube-api-access-vzt4m\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:35 crc kubenswrapper[5072]: I1124 11:25:35.228015 5072 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/647daeca-7489-478d-930c-3a780336be49-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:35 crc kubenswrapper[5072]: I1124 11:25:35.228028 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29qhr\" (UniqueName: \"kubernetes.io/projected/d0b8deb9-6451-4091-bc77-884a3581af75-kube-api-access-29qhr\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:35 crc kubenswrapper[5072]: I1124 11:25:35.228039 5072 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0b8deb9-6451-4091-bc77-884a3581af75-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:35 crc kubenswrapper[5072]: I1124 11:25:35.842814 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-95b4-account-create-x4sc7" Nov 24 11:25:35 crc kubenswrapper[5072]: I1124 11:25:35.842834 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-95b4-account-create-x4sc7" event={"ID":"d0b8deb9-6451-4091-bc77-884a3581af75","Type":"ContainerDied","Data":"b49af41df1b22b96a07d25f59265a37ce79747c8fdffd1b590faa84c46a37080"} Nov 24 11:25:35 crc kubenswrapper[5072]: I1124 11:25:35.842879 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b49af41df1b22b96a07d25f59265a37ce79747c8fdffd1b590faa84c46a37080" Nov 24 11:25:35 crc kubenswrapper[5072]: I1124 11:25:35.845565 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a502-account-create-z6jg6" event={"ID":"647daeca-7489-478d-930c-3a780336be49","Type":"ContainerDied","Data":"780020b6f65a1cbc03044aa6b6b1dd4b8a4a197ab627597c9c62366b0a6bfb7f"} Nov 24 11:25:35 crc kubenswrapper[5072]: I1124 11:25:35.845605 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="780020b6f65a1cbc03044aa6b6b1dd4b8a4a197ab627597c9c62366b0a6bfb7f" Nov 24 11:25:35 crc kubenswrapper[5072]: I1124 11:25:35.845621 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-a502-account-create-z6jg6" Nov 24 11:25:35 crc kubenswrapper[5072]: I1124 11:25:35.848365 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sh9kr" event={"ID":"d4f41a09-fa7a-4077-8502-58295771132e","Type":"ContainerStarted","Data":"f6344617c92e0a271ec3297865b802c61af6300042ac6404db0c92e563bbc952"} Nov 24 11:25:35 crc kubenswrapper[5072]: I1124 11:25:35.880687 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-sh9kr" podStartSLOduration=2.250768567 podStartE2EDuration="6.880668592s" podCreationTimestamp="2025-11-24 11:25:29 +0000 UTC" firstStartedPulling="2025-11-24 11:25:30.427878516 +0000 UTC m=+982.139402992" lastFinishedPulling="2025-11-24 11:25:35.057778531 +0000 UTC m=+986.769303017" observedRunningTime="2025-11-24 11:25:35.875693206 +0000 UTC m=+987.587217692" watchObservedRunningTime="2025-11-24 11:25:35.880668592 +0000 UTC m=+987.592193078" Nov 24 11:25:38 crc kubenswrapper[5072]: I1124 11:25:38.884517 5072 generic.go:334] "Generic (PLEG): container finished" podID="d4f41a09-fa7a-4077-8502-58295771132e" containerID="f6344617c92e0a271ec3297865b802c61af6300042ac6404db0c92e563bbc952" exitCode=0 Nov 24 11:25:38 crc kubenswrapper[5072]: I1124 11:25:38.884618 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sh9kr" event={"ID":"d4f41a09-fa7a-4077-8502-58295771132e","Type":"ContainerDied","Data":"f6344617c92e0a271ec3297865b802c61af6300042ac6404db0c92e563bbc952"} Nov 24 11:25:40 crc kubenswrapper[5072]: I1124 11:25:40.219786 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-sh9kr" Nov 24 11:25:40 crc kubenswrapper[5072]: I1124 11:25:40.317839 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f41a09-fa7a-4077-8502-58295771132e-combined-ca-bundle\") pod \"d4f41a09-fa7a-4077-8502-58295771132e\" (UID: \"d4f41a09-fa7a-4077-8502-58295771132e\") " Nov 24 11:25:40 crc kubenswrapper[5072]: I1124 11:25:40.317903 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4f41a09-fa7a-4077-8502-58295771132e-config-data\") pod \"d4f41a09-fa7a-4077-8502-58295771132e\" (UID: \"d4f41a09-fa7a-4077-8502-58295771132e\") " Nov 24 11:25:40 crc kubenswrapper[5072]: I1124 11:25:40.317979 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjkms\" (UniqueName: \"kubernetes.io/projected/d4f41a09-fa7a-4077-8502-58295771132e-kube-api-access-wjkms\") pod \"d4f41a09-fa7a-4077-8502-58295771132e\" (UID: \"d4f41a09-fa7a-4077-8502-58295771132e\") " Nov 24 11:25:40 crc kubenswrapper[5072]: I1124 11:25:40.329157 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4f41a09-fa7a-4077-8502-58295771132e-kube-api-access-wjkms" (OuterVolumeSpecName: "kube-api-access-wjkms") pod "d4f41a09-fa7a-4077-8502-58295771132e" (UID: "d4f41a09-fa7a-4077-8502-58295771132e"). InnerVolumeSpecName "kube-api-access-wjkms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:25:40 crc kubenswrapper[5072]: I1124 11:25:40.346319 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4f41a09-fa7a-4077-8502-58295771132e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4f41a09-fa7a-4077-8502-58295771132e" (UID: "d4f41a09-fa7a-4077-8502-58295771132e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:25:40 crc kubenswrapper[5072]: I1124 11:25:40.383933 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4f41a09-fa7a-4077-8502-58295771132e-config-data" (OuterVolumeSpecName: "config-data") pod "d4f41a09-fa7a-4077-8502-58295771132e" (UID: "d4f41a09-fa7a-4077-8502-58295771132e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:25:40 crc kubenswrapper[5072]: I1124 11:25:40.420112 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f41a09-fa7a-4077-8502-58295771132e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:40 crc kubenswrapper[5072]: I1124 11:25:40.420168 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4f41a09-fa7a-4077-8502-58295771132e-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:40 crc kubenswrapper[5072]: I1124 11:25:40.420189 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjkms\" (UniqueName: \"kubernetes.io/projected/d4f41a09-fa7a-4077-8502-58295771132e-kube-api-access-wjkms\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:40 crc kubenswrapper[5072]: I1124 11:25:40.905745 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sh9kr" event={"ID":"d4f41a09-fa7a-4077-8502-58295771132e","Type":"ContainerDied","Data":"9a0f049a88b10b9bcfb8d37d017de38982b2ac7a12f7efd91579de4851c134f9"} Nov 24 11:25:40 crc kubenswrapper[5072]: I1124 11:25:40.905806 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a0f049a88b10b9bcfb8d37d017de38982b2ac7a12f7efd91579de4851c134f9" Nov 24 11:25:40 crc kubenswrapper[5072]: I1124 11:25:40.906305 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-sh9kr" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.214713 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-9gmnp"] Nov 24 11:25:41 crc kubenswrapper[5072]: E1124 11:25:41.215733 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="647daeca-7489-478d-930c-3a780336be49" containerName="mariadb-account-create" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.215832 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="647daeca-7489-478d-930c-3a780336be49" containerName="mariadb-account-create" Nov 24 11:25:41 crc kubenswrapper[5072]: E1124 11:25:41.215930 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b8deb9-6451-4091-bc77-884a3581af75" containerName="mariadb-account-create" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.215992 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b8deb9-6451-4091-bc77-884a3581af75" containerName="mariadb-account-create" Nov 24 11:25:41 crc kubenswrapper[5072]: E1124 11:25:41.216058 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc652e4a-54d1-43f7-b547-d86b30ae0797" containerName="mariadb-database-create" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.216125 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc652e4a-54d1-43f7-b547-d86b30ae0797" containerName="mariadb-database-create" Nov 24 11:25:41 crc kubenswrapper[5072]: E1124 11:25:41.216212 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4f41a09-fa7a-4077-8502-58295771132e" containerName="keystone-db-sync" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.216293 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4f41a09-fa7a-4077-8502-58295771132e" containerName="keystone-db-sync" Nov 24 11:25:41 crc kubenswrapper[5072]: E1124 11:25:41.216434 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79a97b6f-0aa6-4059-8495-23ceff788793" containerName="mariadb-account-create" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.216523 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="79a97b6f-0aa6-4059-8495-23ceff788793" containerName="mariadb-account-create" Nov 24 11:25:41 crc kubenswrapper[5072]: E1124 11:25:41.216593 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bffbb2ab-3908-425a-ba38-80a69a37a16a" containerName="mariadb-database-create" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.216651 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="bffbb2ab-3908-425a-ba38-80a69a37a16a" containerName="mariadb-database-create" Nov 24 11:25:41 crc kubenswrapper[5072]: E1124 11:25:41.216714 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f64dc57b-2fb4-4ad8-99a9-f9756664b3c4" containerName="mariadb-database-create" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.216765 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="f64dc57b-2fb4-4ad8-99a9-f9756664b3c4" containerName="mariadb-database-create" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.216975 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="f64dc57b-2fb4-4ad8-99a9-f9756664b3c4" containerName="mariadb-database-create" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.217037 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4f41a09-fa7a-4077-8502-58295771132e" containerName="keystone-db-sync" Nov 24 11:25:41 crc kubenswrapper[5072]: 
I1124 11:25:41.217100 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="bffbb2ab-3908-425a-ba38-80a69a37a16a" containerName="mariadb-database-create" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.217163 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="79a97b6f-0aa6-4059-8495-23ceff788793" containerName="mariadb-account-create" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.217221 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="647daeca-7489-478d-930c-3a780336be49" containerName="mariadb-account-create" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.217282 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0b8deb9-6451-4091-bc77-884a3581af75" containerName="mariadb-account-create" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.217345 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc652e4a-54d1-43f7-b547-d86b30ae0797" containerName="mariadb-database-create" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.224634 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-9gmnp" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.232233 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-9gmnp"] Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.304007 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-lzk69"] Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.307298 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lzk69" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.313209 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.313735 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-lc8qn" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.313897 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.314122 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.315855 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.327342 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lzk69"] Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.335316 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37957a65-eea2-46e5-8aca-52d6d7a4681c-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-9gmnp\" (UID: \"37957a65-eea2-46e5-8aca-52d6d7a4681c\") " pod="openstack/dnsmasq-dns-6546db6db7-9gmnp" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.335394 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n28fx\" (UniqueName: \"kubernetes.io/projected/75768a7e-65f0-498e-8f4e-5e178de8110e-kube-api-access-n28fx\") pod \"keystone-bootstrap-lzk69\" (UID: \"75768a7e-65f0-498e-8f4e-5e178de8110e\") " pod="openstack/keystone-bootstrap-lzk69" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.335434 
5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-config-data\") pod \"keystone-bootstrap-lzk69\" (UID: \"75768a7e-65f0-498e-8f4e-5e178de8110e\") " pod="openstack/keystone-bootstrap-lzk69" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.335499 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckftv\" (UniqueName: \"kubernetes.io/projected/37957a65-eea2-46e5-8aca-52d6d7a4681c-kube-api-access-ckftv\") pod \"dnsmasq-dns-6546db6db7-9gmnp\" (UID: \"37957a65-eea2-46e5-8aca-52d6d7a4681c\") " pod="openstack/dnsmasq-dns-6546db6db7-9gmnp" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.335534 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37957a65-eea2-46e5-8aca-52d6d7a4681c-config\") pod \"dnsmasq-dns-6546db6db7-9gmnp\" (UID: \"37957a65-eea2-46e5-8aca-52d6d7a4681c\") " pod="openstack/dnsmasq-dns-6546db6db7-9gmnp" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.335561 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-combined-ca-bundle\") pod \"keystone-bootstrap-lzk69\" (UID: \"75768a7e-65f0-498e-8f4e-5e178de8110e\") " pod="openstack/keystone-bootstrap-lzk69" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.335586 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37957a65-eea2-46e5-8aca-52d6d7a4681c-dns-svc\") pod \"dnsmasq-dns-6546db6db7-9gmnp\" (UID: \"37957a65-eea2-46e5-8aca-52d6d7a4681c\") " pod="openstack/dnsmasq-dns-6546db6db7-9gmnp" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.335619 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-scripts\") pod \"keystone-bootstrap-lzk69\" (UID: \"75768a7e-65f0-498e-8f4e-5e178de8110e\") " pod="openstack/keystone-bootstrap-lzk69" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.335643 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-credential-keys\") pod \"keystone-bootstrap-lzk69\" (UID: \"75768a7e-65f0-498e-8f4e-5e178de8110e\") " pod="openstack/keystone-bootstrap-lzk69" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.335750 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-fernet-keys\") pod \"keystone-bootstrap-lzk69\" (UID: \"75768a7e-65f0-498e-8f4e-5e178de8110e\") " pod="openstack/keystone-bootstrap-lzk69" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.335799 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37957a65-eea2-46e5-8aca-52d6d7a4681c-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-9gmnp\" (UID: \"37957a65-eea2-46e5-8aca-52d6d7a4681c\") " pod="openstack/dnsmasq-dns-6546db6db7-9gmnp" Nov 24 11:25:41 crc 
kubenswrapper[5072]: I1124 11:25:41.436946 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37957a65-eea2-46e5-8aca-52d6d7a4681c-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-9gmnp\" (UID: \"37957a65-eea2-46e5-8aca-52d6d7a4681c\") " pod="openstack/dnsmasq-dns-6546db6db7-9gmnp" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.436995 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n28fx\" (UniqueName: \"kubernetes.io/projected/75768a7e-65f0-498e-8f4e-5e178de8110e-kube-api-access-n28fx\") pod \"keystone-bootstrap-lzk69\" (UID: \"75768a7e-65f0-498e-8f4e-5e178de8110e\") " pod="openstack/keystone-bootstrap-lzk69" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.437016 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-config-data\") pod \"keystone-bootstrap-lzk69\" (UID: \"75768a7e-65f0-498e-8f4e-5e178de8110e\") " pod="openstack/keystone-bootstrap-lzk69" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.437065 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckftv\" (UniqueName: \"kubernetes.io/projected/37957a65-eea2-46e5-8aca-52d6d7a4681c-kube-api-access-ckftv\") pod \"dnsmasq-dns-6546db6db7-9gmnp\" (UID: \"37957a65-eea2-46e5-8aca-52d6d7a4681c\") " pod="openstack/dnsmasq-dns-6546db6db7-9gmnp" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.437093 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37957a65-eea2-46e5-8aca-52d6d7a4681c-config\") pod \"dnsmasq-dns-6546db6db7-9gmnp\" (UID: \"37957a65-eea2-46e5-8aca-52d6d7a4681c\") " pod="openstack/dnsmasq-dns-6546db6db7-9gmnp" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.437108 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-combined-ca-bundle\") pod \"keystone-bootstrap-lzk69\" (UID: \"75768a7e-65f0-498e-8f4e-5e178de8110e\") " pod="openstack/keystone-bootstrap-lzk69" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.437127 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37957a65-eea2-46e5-8aca-52d6d7a4681c-dns-svc\") pod \"dnsmasq-dns-6546db6db7-9gmnp\" (UID: \"37957a65-eea2-46e5-8aca-52d6d7a4681c\") " pod="openstack/dnsmasq-dns-6546db6db7-9gmnp" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.437154 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-scripts\") pod \"keystone-bootstrap-lzk69\" (UID: \"75768a7e-65f0-498e-8f4e-5e178de8110e\") " pod="openstack/keystone-bootstrap-lzk69" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.437176 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-credential-keys\") pod \"keystone-bootstrap-lzk69\" (UID: \"75768a7e-65f0-498e-8f4e-5e178de8110e\") " pod="openstack/keystone-bootstrap-lzk69" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.437219 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-fernet-keys\") pod \"keystone-bootstrap-lzk69\" (UID: \"75768a7e-65f0-498e-8f4e-5e178de8110e\") " pod="openstack/keystone-bootstrap-lzk69" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.437256 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37957a65-eea2-46e5-8aca-52d6d7a4681c-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-9gmnp\" (UID: \"37957a65-eea2-46e5-8aca-52d6d7a4681c\") " pod="openstack/dnsmasq-dns-6546db6db7-9gmnp" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.438830 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37957a65-eea2-46e5-8aca-52d6d7a4681c-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-9gmnp\" (UID: \"37957a65-eea2-46e5-8aca-52d6d7a4681c\") " pod="openstack/dnsmasq-dns-6546db6db7-9gmnp" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.439825 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37957a65-eea2-46e5-8aca-52d6d7a4681c-dns-svc\") pod \"dnsmasq-dns-6546db6db7-9gmnp\" (UID: \"37957a65-eea2-46e5-8aca-52d6d7a4681c\") " pod="openstack/dnsmasq-dns-6546db6db7-9gmnp" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.440323 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37957a65-eea2-46e5-8aca-52d6d7a4681c-config\") pod \"dnsmasq-dns-6546db6db7-9gmnp\" (UID: \"37957a65-eea2-46e5-8aca-52d6d7a4681c\") " pod="openstack/dnsmasq-dns-6546db6db7-9gmnp" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.440439 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37957a65-eea2-46e5-8aca-52d6d7a4681c-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-9gmnp\" (UID: \"37957a65-eea2-46e5-8aca-52d6d7a4681c\") " pod="openstack/dnsmasq-dns-6546db6db7-9gmnp" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.440990 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-scripts\") pod \"keystone-bootstrap-lzk69\" (UID: \"75768a7e-65f0-498e-8f4e-5e178de8110e\") " pod="openstack/keystone-bootstrap-lzk69" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.441114 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-combined-ca-bundle\") pod \"keystone-bootstrap-lzk69\" (UID: \"75768a7e-65f0-498e-8f4e-5e178de8110e\") " pod="openstack/keystone-bootstrap-lzk69" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.441495 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-fernet-keys\") pod \"keystone-bootstrap-lzk69\" (UID: \"75768a7e-65f0-498e-8f4e-5e178de8110e\") " pod="openstack/keystone-bootstrap-lzk69" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.444256 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-credential-keys\") pod \"keystone-bootstrap-lzk69\" (UID: \"75768a7e-65f0-498e-8f4e-5e178de8110e\") " 
pod="openstack/keystone-bootstrap-lzk69" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.449300 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-config-data\") pod \"keystone-bootstrap-lzk69\" (UID: \"75768a7e-65f0-498e-8f4e-5e178de8110e\") " pod="openstack/keystone-bootstrap-lzk69" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.456295 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckftv\" (UniqueName: \"kubernetes.io/projected/37957a65-eea2-46e5-8aca-52d6d7a4681c-kube-api-access-ckftv\") pod \"dnsmasq-dns-6546db6db7-9gmnp\" (UID: \"37957a65-eea2-46e5-8aca-52d6d7a4681c\") " pod="openstack/dnsmasq-dns-6546db6db7-9gmnp" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.463026 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n28fx\" (UniqueName: \"kubernetes.io/projected/75768a7e-65f0-498e-8f4e-5e178de8110e-kube-api-access-n28fx\") pod \"keystone-bootstrap-lzk69\" (UID: \"75768a7e-65f0-498e-8f4e-5e178de8110e\") " pod="openstack/keystone-bootstrap-lzk69" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.540973 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.542820 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.542811 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-9gmnp" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.556588 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.556729 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.566199 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.577063 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-g5npx"] Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.607450 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-w6mv2"] Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.607535 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g5npx" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.609153 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-w6mv2" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.621331 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-8npk7"] Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.623731 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-8npk7" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.627196 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-g5npx"] Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.640754 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/feff4031-5012-468f-8dd6-d58c5dae8d29-db-sync-config-data\") pod \"barbican-db-sync-g5npx\" (UID: \"feff4031-5012-468f-8dd6-d58c5dae8d29\") " pod="openstack/barbican-db-sync-g5npx" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.640849 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-scripts\") pod \"ceilometer-0\" (UID: \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\") " pod="openstack/ceilometer-0" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.640870 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bb192e24-d3b0-4e96-8bbf-edb5b93ecf64-config\") pod \"neutron-db-sync-w6mv2\" (UID: \"bb192e24-d3b0-4e96-8bbf-edb5b93ecf64\") " pod="openstack/neutron-db-sync-w6mv2" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.640889 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\") " pod="openstack/ceilometer-0" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.640915 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feff4031-5012-468f-8dd6-d58c5dae8d29-combined-ca-bundle\") pod \"barbican-db-sync-g5npx\" (UID: \"feff4031-5012-468f-8dd6-d58c5dae8d29\") " pod="openstack/barbican-db-sync-g5npx" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.640950 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvgkt\" (UniqueName: \"kubernetes.io/projected/feff4031-5012-468f-8dd6-d58c5dae8d29-kube-api-access-rvgkt\") pod \"barbican-db-sync-g5npx\" (UID: \"feff4031-5012-468f-8dd6-d58c5dae8d29\") " pod="openstack/barbican-db-sync-g5npx" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.640983 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-run-httpd\") pod \"ceilometer-0\" (UID: \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\") " pod="openstack/ceilometer-0" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.641004 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-config-data\") pod \"ceilometer-0\" (UID: \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\") " pod="openstack/ceilometer-0" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.641021 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx66f\" (UniqueName: 
\"kubernetes.io/projected/bb192e24-d3b0-4e96-8bbf-edb5b93ecf64-kube-api-access-dx66f\") pod \"neutron-db-sync-w6mv2\" (UID: \"bb192e24-d3b0-4e96-8bbf-edb5b93ecf64\") " pod="openstack/neutron-db-sync-w6mv2" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.641035 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-log-httpd\") pod \"ceilometer-0\" (UID: \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\") " pod="openstack/ceilometer-0" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.641335 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb192e24-d3b0-4e96-8bbf-edb5b93ecf64-combined-ca-bundle\") pod \"neutron-db-sync-w6mv2\" (UID: \"bb192e24-d3b0-4e96-8bbf-edb5b93ecf64\") " pod="openstack/neutron-db-sync-w6mv2" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.641417 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\") " pod="openstack/ceilometer-0" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.641475 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftdk2\" (UniqueName: \"kubernetes.io/projected/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-kube-api-access-ftdk2\") pod \"ceilometer-0\" (UID: \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\") " pod="openstack/ceilometer-0" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.644253 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-8lj7t" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.644538 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9mkjw" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.644585 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.644729 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.644543 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.645137 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.645172 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.645303 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-rbcpr" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.687426 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lzk69" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.743092 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/feff4031-5012-468f-8dd6-d58c5dae8d29-db-sync-config-data\") pod \"barbican-db-sync-g5npx\" (UID: \"feff4031-5012-468f-8dd6-d58c5dae8d29\") " pod="openstack/barbican-db-sync-g5npx" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.743481 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tl5c\" (UniqueName: \"kubernetes.io/projected/ab063039-b4d9-45d8-9336-35316fd1ab08-kube-api-access-8tl5c\") pod \"cinder-db-sync-8npk7\" (UID: \"ab063039-b4d9-45d8-9336-35316fd1ab08\") " pod="openstack/cinder-db-sync-8npk7" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.743530 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-scripts\") pod \"ceilometer-0\" (UID: \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\") " pod="openstack/ceilometer-0" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.743569 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bb192e24-d3b0-4e96-8bbf-edb5b93ecf64-config\") pod \"neutron-db-sync-w6mv2\" (UID: \"bb192e24-d3b0-4e96-8bbf-edb5b93ecf64\") " pod="openstack/neutron-db-sync-w6mv2" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.743602 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab063039-b4d9-45d8-9336-35316fd1ab08-config-data\") pod \"cinder-db-sync-8npk7\" (UID: \"ab063039-b4d9-45d8-9336-35316fd1ab08\") " pod="openstack/cinder-db-sync-8npk7" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.743642 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\") " pod="openstack/ceilometer-0" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.743691 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feff4031-5012-468f-8dd6-d58c5dae8d29-combined-ca-bundle\") pod \"barbican-db-sync-g5npx\" (UID: \"feff4031-5012-468f-8dd6-d58c5dae8d29\") " pod="openstack/barbican-db-sync-g5npx" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.743749 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab063039-b4d9-45d8-9336-35316fd1ab08-combined-ca-bundle\") pod \"cinder-db-sync-8npk7\" (UID: \"ab063039-b4d9-45d8-9336-35316fd1ab08\") " pod="openstack/cinder-db-sync-8npk7" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.743796 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvgkt\" (UniqueName: \"kubernetes.io/projected/feff4031-5012-468f-8dd6-d58c5dae8d29-kube-api-access-rvgkt\") pod \"barbican-db-sync-g5npx\" (UID: \"feff4031-5012-468f-8dd6-d58c5dae8d29\") " pod="openstack/barbican-db-sync-g5npx" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.743880 5072 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-run-httpd\") pod \"ceilometer-0\" (UID: \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\") " pod="openstack/ceilometer-0" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.743937 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-config-data\") pod \"ceilometer-0\" (UID: \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\") " pod="openstack/ceilometer-0" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.743985 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx66f\" (UniqueName: \"kubernetes.io/projected/bb192e24-d3b0-4e96-8bbf-edb5b93ecf64-kube-api-access-dx66f\") pod \"neutron-db-sync-w6mv2\" (UID: \"bb192e24-d3b0-4e96-8bbf-edb5b93ecf64\") " pod="openstack/neutron-db-sync-w6mv2" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.744016 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-log-httpd\") pod \"ceilometer-0\" (UID: \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\") " pod="openstack/ceilometer-0" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.744049 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab063039-b4d9-45d8-9336-35316fd1ab08-scripts\") pod \"cinder-db-sync-8npk7\" (UID: \"ab063039-b4d9-45d8-9336-35316fd1ab08\") " pod="openstack/cinder-db-sync-8npk7" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.744091 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab063039-b4d9-45d8-9336-35316fd1ab08-etc-machine-id\") pod \"cinder-db-sync-8npk7\" (UID: \"ab063039-b4d9-45d8-9336-35316fd1ab08\") " pod="openstack/cinder-db-sync-8npk7" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.744126 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ab063039-b4d9-45d8-9336-35316fd1ab08-db-sync-config-data\") pod \"cinder-db-sync-8npk7\" (UID: \"ab063039-b4d9-45d8-9336-35316fd1ab08\") " pod="openstack/cinder-db-sync-8npk7" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.744180 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb192e24-d3b0-4e96-8bbf-edb5b93ecf64-combined-ca-bundle\") pod \"neutron-db-sync-w6mv2\" (UID: \"bb192e24-d3b0-4e96-8bbf-edb5b93ecf64\") " pod="openstack/neutron-db-sync-w6mv2" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.744225 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\") " pod="openstack/ceilometer-0" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.744270 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftdk2\" (UniqueName: \"kubernetes.io/projected/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-kube-api-access-ftdk2\") pod 
\"ceilometer-0\" (UID: \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\") " pod="openstack/ceilometer-0" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.745898 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-run-httpd\") pod \"ceilometer-0\" (UID: \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\") " pod="openstack/ceilometer-0" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.747847 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feff4031-5012-468f-8dd6-d58c5dae8d29-combined-ca-bundle\") pod \"barbican-db-sync-g5npx\" (UID: \"feff4031-5012-468f-8dd6-d58c5dae8d29\") " pod="openstack/barbican-db-sync-g5npx" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.747933 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/bb192e24-d3b0-4e96-8bbf-edb5b93ecf64-config\") pod \"neutron-db-sync-w6mv2\" (UID: \"bb192e24-d3b0-4e96-8bbf-edb5b93ecf64\") " pod="openstack/neutron-db-sync-w6mv2" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.748311 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-log-httpd\") pod \"ceilometer-0\" (UID: \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\") " pod="openstack/ceilometer-0" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.748927 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/feff4031-5012-468f-8dd6-d58c5dae8d29-db-sync-config-data\") pod \"barbican-db-sync-g5npx\" (UID: \"feff4031-5012-468f-8dd6-d58c5dae8d29\") " pod="openstack/barbican-db-sync-g5npx" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.749229 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-scripts\") pod \"ceilometer-0\" (UID: \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\") " pod="openstack/ceilometer-0" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.750988 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-config-data\") pod \"ceilometer-0\" (UID: \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\") " pod="openstack/ceilometer-0" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.752223 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\") " pod="openstack/ceilometer-0" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.752242 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\") " pod="openstack/ceilometer-0" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.752758 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb192e24-d3b0-4e96-8bbf-edb5b93ecf64-combined-ca-bundle\") pod \"neutron-db-sync-w6mv2\" (UID: 
\"bb192e24-d3b0-4e96-8bbf-edb5b93ecf64\") " pod="openstack/neutron-db-sync-w6mv2" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.790835 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-w6mv2"] Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.792031 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx66f\" (UniqueName: \"kubernetes.io/projected/bb192e24-d3b0-4e96-8bbf-edb5b93ecf64-kube-api-access-dx66f\") pod \"neutron-db-sync-w6mv2\" (UID: \"bb192e24-d3b0-4e96-8bbf-edb5b93ecf64\") " pod="openstack/neutron-db-sync-w6mv2" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.797856 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-8npk7"] Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.835874 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvgkt\" (UniqueName: \"kubernetes.io/projected/feff4031-5012-468f-8dd6-d58c5dae8d29-kube-api-access-rvgkt\") pod \"barbican-db-sync-g5npx\" (UID: \"feff4031-5012-468f-8dd6-d58c5dae8d29\") " pod="openstack/barbican-db-sync-g5npx" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.839074 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftdk2\" (UniqueName: \"kubernetes.io/projected/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-kube-api-access-ftdk2\") pod \"ceilometer-0\" (UID: \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\") " pod="openstack/ceilometer-0" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.845827 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab063039-b4d9-45d8-9336-35316fd1ab08-scripts\") pod \"cinder-db-sync-8npk7\" (UID: \"ab063039-b4d9-45d8-9336-35316fd1ab08\") " pod="openstack/cinder-db-sync-8npk7" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.845884 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab063039-b4d9-45d8-9336-35316fd1ab08-etc-machine-id\") pod \"cinder-db-sync-8npk7\" (UID: \"ab063039-b4d9-45d8-9336-35316fd1ab08\") " pod="openstack/cinder-db-sync-8npk7" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.845906 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ab063039-b4d9-45d8-9336-35316fd1ab08-db-sync-config-data\") pod \"cinder-db-sync-8npk7\" (UID: \"ab063039-b4d9-45d8-9336-35316fd1ab08\") " pod="openstack/cinder-db-sync-8npk7" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.845970 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tl5c\" (UniqueName: \"kubernetes.io/projected/ab063039-b4d9-45d8-9336-35316fd1ab08-kube-api-access-8tl5c\") pod \"cinder-db-sync-8npk7\" (UID: \"ab063039-b4d9-45d8-9336-35316fd1ab08\") " pod="openstack/cinder-db-sync-8npk7" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.845991 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab063039-b4d9-45d8-9336-35316fd1ab08-config-data\") pod \"cinder-db-sync-8npk7\" (UID: \"ab063039-b4d9-45d8-9336-35316fd1ab08\") " pod="openstack/cinder-db-sync-8npk7" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.846034 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/ab063039-b4d9-45d8-9336-35316fd1ab08-combined-ca-bundle\") pod \"cinder-db-sync-8npk7\" (UID: \"ab063039-b4d9-45d8-9336-35316fd1ab08\") " pod="openstack/cinder-db-sync-8npk7" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.853641 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab063039-b4d9-45d8-9336-35316fd1ab08-etc-machine-id\") pod \"cinder-db-sync-8npk7\" (UID: \"ab063039-b4d9-45d8-9336-35316fd1ab08\") " pod="openstack/cinder-db-sync-8npk7" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.854856 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab063039-b4d9-45d8-9336-35316fd1ab08-combined-ca-bundle\") pod \"cinder-db-sync-8npk7\" (UID: \"ab063039-b4d9-45d8-9336-35316fd1ab08\") " pod="openstack/cinder-db-sync-8npk7" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.857062 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab063039-b4d9-45d8-9336-35316fd1ab08-config-data\") pod \"cinder-db-sync-8npk7\" (UID: \"ab063039-b4d9-45d8-9336-35316fd1ab08\") " pod="openstack/cinder-db-sync-8npk7" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.861796 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.864988 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ab063039-b4d9-45d8-9336-35316fd1ab08-db-sync-config-data\") pod \"cinder-db-sync-8npk7\" (UID: \"ab063039-b4d9-45d8-9336-35316fd1ab08\") " pod="openstack/cinder-db-sync-8npk7" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.869905 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab063039-b4d9-45d8-9336-35316fd1ab08-scripts\") pod \"cinder-db-sync-8npk7\" (UID: \"ab063039-b4d9-45d8-9336-35316fd1ab08\") " pod="openstack/cinder-db-sync-8npk7" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.883784 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tl5c\" (UniqueName: \"kubernetes.io/projected/ab063039-b4d9-45d8-9336-35316fd1ab08-kube-api-access-8tl5c\") pod \"cinder-db-sync-8npk7\" (UID: \"ab063039-b4d9-45d8-9336-35316fd1ab08\") " pod="openstack/cinder-db-sync-8npk7" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.912684 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-9gmnp"] Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.945253 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-6wkj4"] Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.946051 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g5npx" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.946287 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-6wkj4" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.948779 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.948935 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.949123 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-c78vm" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.955688 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-w6mv2" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.965052 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-8npk7" Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.980657 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-gkdpr"] Nov 24 11:25:41 crc kubenswrapper[5072]: I1124 11:25:41.981965 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-gkdpr" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.013643 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-6wkj4"] Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.033956 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-gkdpr"] Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.048912 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09cc3e8f-663e-448b-b90f-8d794006c335-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-gkdpr\" (UID: \"09cc3e8f-663e-448b-b90f-8d794006c335\") " pod="openstack/dnsmasq-dns-7987f74bbc-gkdpr" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.049336 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f6d27e-d239-4e24-8381-872893433a07-config-data\") pod \"placement-db-sync-6wkj4\" (UID: \"68f6d27e-d239-4e24-8381-872893433a07\") " pod="openstack/placement-db-sync-6wkj4" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.049675 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68f6d27e-d239-4e24-8381-872893433a07-scripts\") pod \"placement-db-sync-6wkj4\" (UID: \"68f6d27e-d239-4e24-8381-872893433a07\") " pod="openstack/placement-db-sync-6wkj4" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.049753 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hxwm\" (UniqueName: \"kubernetes.io/projected/09cc3e8f-663e-448b-b90f-8d794006c335-kube-api-access-8hxwm\") pod \"dnsmasq-dns-7987f74bbc-gkdpr\" (UID: \"09cc3e8f-663e-448b-b90f-8d794006c335\") " pod="openstack/dnsmasq-dns-7987f74bbc-gkdpr" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.049800 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09cc3e8f-663e-448b-b90f-8d794006c335-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-gkdpr\" (UID: \"09cc3e8f-663e-448b-b90f-8d794006c335\") " 
pod="openstack/dnsmasq-dns-7987f74bbc-gkdpr" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.049827 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg59k\" (UniqueName: \"kubernetes.io/projected/68f6d27e-d239-4e24-8381-872893433a07-kube-api-access-xg59k\") pod \"placement-db-sync-6wkj4\" (UID: \"68f6d27e-d239-4e24-8381-872893433a07\") " pod="openstack/placement-db-sync-6wkj4" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.049875 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cc3e8f-663e-448b-b90f-8d794006c335-config\") pod \"dnsmasq-dns-7987f74bbc-gkdpr\" (UID: \"09cc3e8f-663e-448b-b90f-8d794006c335\") " pod="openstack/dnsmasq-dns-7987f74bbc-gkdpr" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.049911 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68f6d27e-d239-4e24-8381-872893433a07-logs\") pod \"placement-db-sync-6wkj4\" (UID: \"68f6d27e-d239-4e24-8381-872893433a07\") " pod="openstack/placement-db-sync-6wkj4" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.049932 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f6d27e-d239-4e24-8381-872893433a07-combined-ca-bundle\") pod \"placement-db-sync-6wkj4\" (UID: \"68f6d27e-d239-4e24-8381-872893433a07\") " pod="openstack/placement-db-sync-6wkj4" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.049996 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09cc3e8f-663e-448b-b90f-8d794006c335-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-gkdpr\" (UID: \"09cc3e8f-663e-448b-b90f-8d794006c335\") " pod="openstack/dnsmasq-dns-7987f74bbc-gkdpr" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.151325 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09cc3e8f-663e-448b-b90f-8d794006c335-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-gkdpr\" (UID: \"09cc3e8f-663e-448b-b90f-8d794006c335\") " pod="openstack/dnsmasq-dns-7987f74bbc-gkdpr" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.151393 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f6d27e-d239-4e24-8381-872893433a07-config-data\") pod \"placement-db-sync-6wkj4\" (UID: \"68f6d27e-d239-4e24-8381-872893433a07\") " pod="openstack/placement-db-sync-6wkj4" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.151415 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68f6d27e-d239-4e24-8381-872893433a07-scripts\") pod \"placement-db-sync-6wkj4\" (UID: \"68f6d27e-d239-4e24-8381-872893433a07\") " pod="openstack/placement-db-sync-6wkj4" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.151466 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hxwm\" (UniqueName: \"kubernetes.io/projected/09cc3e8f-663e-448b-b90f-8d794006c335-kube-api-access-8hxwm\") pod \"dnsmasq-dns-7987f74bbc-gkdpr\" (UID: \"09cc3e8f-663e-448b-b90f-8d794006c335\") " pod="openstack/dnsmasq-dns-7987f74bbc-gkdpr" Nov 24 
11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.151493 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09cc3e8f-663e-448b-b90f-8d794006c335-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-gkdpr\" (UID: \"09cc3e8f-663e-448b-b90f-8d794006c335\") " pod="openstack/dnsmasq-dns-7987f74bbc-gkdpr" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.151508 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg59k\" (UniqueName: \"kubernetes.io/projected/68f6d27e-d239-4e24-8381-872893433a07-kube-api-access-xg59k\") pod \"placement-db-sync-6wkj4\" (UID: \"68f6d27e-d239-4e24-8381-872893433a07\") " pod="openstack/placement-db-sync-6wkj4" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.151536 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cc3e8f-663e-448b-b90f-8d794006c335-config\") pod \"dnsmasq-dns-7987f74bbc-gkdpr\" (UID: \"09cc3e8f-663e-448b-b90f-8d794006c335\") " pod="openstack/dnsmasq-dns-7987f74bbc-gkdpr" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.152669 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cc3e8f-663e-448b-b90f-8d794006c335-config\") pod \"dnsmasq-dns-7987f74bbc-gkdpr\" (UID: \"09cc3e8f-663e-448b-b90f-8d794006c335\") " pod="openstack/dnsmasq-dns-7987f74bbc-gkdpr" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.152676 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09cc3e8f-663e-448b-b90f-8d794006c335-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-gkdpr\" (UID: \"09cc3e8f-663e-448b-b90f-8d794006c335\") " pod="openstack/dnsmasq-dns-7987f74bbc-gkdpr" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.153782 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09cc3e8f-663e-448b-b90f-8d794006c335-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-gkdpr\" (UID: \"09cc3e8f-663e-448b-b90f-8d794006c335\") " pod="openstack/dnsmasq-dns-7987f74bbc-gkdpr" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.153849 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68f6d27e-d239-4e24-8381-872893433a07-logs\") pod \"placement-db-sync-6wkj4\" (UID: \"68f6d27e-d239-4e24-8381-872893433a07\") " pod="openstack/placement-db-sync-6wkj4" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.153867 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f6d27e-d239-4e24-8381-872893433a07-combined-ca-bundle\") pod \"placement-db-sync-6wkj4\" (UID: \"68f6d27e-d239-4e24-8381-872893433a07\") " pod="openstack/placement-db-sync-6wkj4" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.154134 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68f6d27e-d239-4e24-8381-872893433a07-logs\") pod \"placement-db-sync-6wkj4\" (UID: \"68f6d27e-d239-4e24-8381-872893433a07\") " pod="openstack/placement-db-sync-6wkj4" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.154748 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/09cc3e8f-663e-448b-b90f-8d794006c335-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-gkdpr\" (UID: \"09cc3e8f-663e-448b-b90f-8d794006c335\") " pod="openstack/dnsmasq-dns-7987f74bbc-gkdpr" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.154209 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09cc3e8f-663e-448b-b90f-8d794006c335-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-gkdpr\" (UID: \"09cc3e8f-663e-448b-b90f-8d794006c335\") " pod="openstack/dnsmasq-dns-7987f74bbc-gkdpr" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.156135 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68f6d27e-d239-4e24-8381-872893433a07-scripts\") pod \"placement-db-sync-6wkj4\" (UID: \"68f6d27e-d239-4e24-8381-872893433a07\") " pod="openstack/placement-db-sync-6wkj4" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.159562 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f6d27e-d239-4e24-8381-872893433a07-combined-ca-bundle\") pod \"placement-db-sync-6wkj4\" (UID: \"68f6d27e-d239-4e24-8381-872893433a07\") " pod="openstack/placement-db-sync-6wkj4" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.161104 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f6d27e-d239-4e24-8381-872893433a07-config-data\") pod \"placement-db-sync-6wkj4\" (UID: \"68f6d27e-d239-4e24-8381-872893433a07\") " pod="openstack/placement-db-sync-6wkj4" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.170892 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg59k\" (UniqueName: \"kubernetes.io/projected/68f6d27e-d239-4e24-8381-872893433a07-kube-api-access-xg59k\") pod \"placement-db-sync-6wkj4\" (UID: \"68f6d27e-d239-4e24-8381-872893433a07\") " pod="openstack/placement-db-sync-6wkj4" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.171126 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hxwm\" (UniqueName: \"kubernetes.io/projected/09cc3e8f-663e-448b-b90f-8d794006c335-kube-api-access-8hxwm\") pod \"dnsmasq-dns-7987f74bbc-gkdpr\" (UID: \"09cc3e8f-663e-448b-b90f-8d794006c335\") " pod="openstack/dnsmasq-dns-7987f74bbc-gkdpr" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.278622 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-9gmnp"] Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.281864 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-6wkj4" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.325817 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-gkdpr" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.409002 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.414739 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lzk69"] Nov 24 11:25:42 crc kubenswrapper[5072]: W1124 11:25:42.415466 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b0e75bc_78b4_45e2_9c55_7b573ab3cc15.slice/crio-e223c0f113b92b9808ab93835aff54c6d0e81c819bc94ad66797878bff8a649e WatchSource:0}: Error finding container e223c0f113b92b9808ab93835aff54c6d0e81c819bc94ad66797878bff8a649e: Status 404 returned error can't find the container with id e223c0f113b92b9808ab93835aff54c6d0e81c819bc94ad66797878bff8a649e Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.579220 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-8npk7"] Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.584705 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-g5npx"] Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.599156 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-w6mv2"] Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.813037 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-gkdpr"] Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.865690 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-6wkj4"] Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.940963 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-w6mv2" event={"ID":"bb192e24-d3b0-4e96-8bbf-edb5b93ecf64","Type":"ContainerStarted","Data":"01682fdca88f8d5d594c3f26d4e2b74dcece45edb2e28f32c44602dfccc2f459"} Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.941016 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-w6mv2" event={"ID":"bb192e24-d3b0-4e96-8bbf-edb5b93ecf64","Type":"ContainerStarted","Data":"22b61f5c47c23120a3be2e16de692b6d797a5c4f4b1ffed88756c7ee883898ac"} Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.943487 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g5npx" event={"ID":"feff4031-5012-468f-8dd6-d58c5dae8d29","Type":"ContainerStarted","Data":"bdbd39a144d44d45a300b33842ecdeb7cb131ab8fd0489d4eb9c0865a9231705"} Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.945854 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6wkj4" event={"ID":"68f6d27e-d239-4e24-8381-872893433a07","Type":"ContainerStarted","Data":"aa4ca9518aee1324e10f1692917fddcddfa62021fe409712ddaf77b42ed7b287"} Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.947406 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15","Type":"ContainerStarted","Data":"e223c0f113b92b9808ab93835aff54c6d0e81c819bc94ad66797878bff8a649e"} Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.949667 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lzk69" 
event={"ID":"75768a7e-65f0-498e-8f4e-5e178de8110e","Type":"ContainerStarted","Data":"d47123d9a768cc80969cf1ab5eeb3b37a3f4ba43a727da9cffb6be1900702a41"} Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.949693 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lzk69" event={"ID":"75768a7e-65f0-498e-8f4e-5e178de8110e","Type":"ContainerStarted","Data":"76e52b5a4d4c3f8f086a62c8ae594ec13395efc2a694cab674e5d1da75a55bc3"} Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.957014 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-w6mv2" podStartSLOduration=1.956994889 podStartE2EDuration="1.956994889s" podCreationTimestamp="2025-11-24 11:25:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:25:42.955414869 +0000 UTC m=+994.666939355" watchObservedRunningTime="2025-11-24 11:25:42.956994889 +0000 UTC m=+994.668519375" Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.959269 5072 generic.go:334] "Generic (PLEG): container finished" podID="37957a65-eea2-46e5-8aca-52d6d7a4681c" containerID="9c653e02cb0959080a1a52547e16f5b8a41b1bdfcd90a26db8119c4bcde681de" exitCode=0 Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.959705 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-9gmnp" event={"ID":"37957a65-eea2-46e5-8aca-52d6d7a4681c","Type":"ContainerDied","Data":"9c653e02cb0959080a1a52547e16f5b8a41b1bdfcd90a26db8119c4bcde681de"} Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.959764 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-9gmnp" event={"ID":"37957a65-eea2-46e5-8aca-52d6d7a4681c","Type":"ContainerStarted","Data":"72ea1440c65d44d0972892a0a5c525359aef01c807bd4f87e57f01a7aa169e8b"} Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.962945 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8npk7" event={"ID":"ab063039-b4d9-45d8-9336-35316fd1ab08","Type":"ContainerStarted","Data":"ed8d58bb6d200b2eed07554c18358fbda3effb95e82793acbfa6e6f8373b4e18"} Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.965559 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-gkdpr" event={"ID":"09cc3e8f-663e-448b-b90f-8d794006c335","Type":"ContainerStarted","Data":"f8fbf131a977f58c3d5c7bd192ce10d3a68ad3c9f8f869645a44ce6215082ff9"} Nov 24 11:25:42 crc kubenswrapper[5072]: I1124 11:25:42.985393 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-lzk69" podStartSLOduration=1.985356108 podStartE2EDuration="1.985356108s" podCreationTimestamp="2025-11-24 11:25:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:25:42.971115467 +0000 UTC m=+994.682639933" watchObservedRunningTime="2025-11-24 11:25:42.985356108 +0000 UTC m=+994.696880594" Nov 24 11:25:43 crc kubenswrapper[5072]: I1124 11:25:43.287359 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-9gmnp" Nov 24 11:25:43 crc kubenswrapper[5072]: I1124 11:25:43.376918 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37957a65-eea2-46e5-8aca-52d6d7a4681c-ovsdbserver-sb\") pod \"37957a65-eea2-46e5-8aca-52d6d7a4681c\" (UID: \"37957a65-eea2-46e5-8aca-52d6d7a4681c\") " Nov 24 11:25:43 crc kubenswrapper[5072]: I1124 11:25:43.376966 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37957a65-eea2-46e5-8aca-52d6d7a4681c-dns-svc\") pod \"37957a65-eea2-46e5-8aca-52d6d7a4681c\" (UID: \"37957a65-eea2-46e5-8aca-52d6d7a4681c\") " Nov 24 11:25:43 crc kubenswrapper[5072]: I1124 11:25:43.376990 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37957a65-eea2-46e5-8aca-52d6d7a4681c-config\") pod \"37957a65-eea2-46e5-8aca-52d6d7a4681c\" (UID: \"37957a65-eea2-46e5-8aca-52d6d7a4681c\") " Nov 24 11:25:43 crc kubenswrapper[5072]: I1124 11:25:43.377014 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37957a65-eea2-46e5-8aca-52d6d7a4681c-ovsdbserver-nb\") pod \"37957a65-eea2-46e5-8aca-52d6d7a4681c\" (UID: \"37957a65-eea2-46e5-8aca-52d6d7a4681c\") " Nov 24 11:25:43 crc kubenswrapper[5072]: I1124 11:25:43.377102 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckftv\" (UniqueName: \"kubernetes.io/projected/37957a65-eea2-46e5-8aca-52d6d7a4681c-kube-api-access-ckftv\") pod \"37957a65-eea2-46e5-8aca-52d6d7a4681c\" (UID: \"37957a65-eea2-46e5-8aca-52d6d7a4681c\") " Nov 24 11:25:43 crc kubenswrapper[5072]: I1124 11:25:43.381157 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37957a65-eea2-46e5-8aca-52d6d7a4681c-kube-api-access-ckftv" (OuterVolumeSpecName: "kube-api-access-ckftv") pod "37957a65-eea2-46e5-8aca-52d6d7a4681c" (UID: "37957a65-eea2-46e5-8aca-52d6d7a4681c"). InnerVolumeSpecName "kube-api-access-ckftv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:25:43 crc kubenswrapper[5072]: I1124 11:25:43.413366 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37957a65-eea2-46e5-8aca-52d6d7a4681c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "37957a65-eea2-46e5-8aca-52d6d7a4681c" (UID: "37957a65-eea2-46e5-8aca-52d6d7a4681c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:25:43 crc kubenswrapper[5072]: I1124 11:25:43.439879 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37957a65-eea2-46e5-8aca-52d6d7a4681c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "37957a65-eea2-46e5-8aca-52d6d7a4681c" (UID: "37957a65-eea2-46e5-8aca-52d6d7a4681c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:25:43 crc kubenswrapper[5072]: I1124 11:25:43.475100 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37957a65-eea2-46e5-8aca-52d6d7a4681c-config" (OuterVolumeSpecName: "config") pod "37957a65-eea2-46e5-8aca-52d6d7a4681c" (UID: "37957a65-eea2-46e5-8aca-52d6d7a4681c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:25:43 crc kubenswrapper[5072]: I1124 11:25:43.485342 5072 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37957a65-eea2-46e5-8aca-52d6d7a4681c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:43 crc kubenswrapper[5072]: I1124 11:25:43.485387 5072 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37957a65-eea2-46e5-8aca-52d6d7a4681c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:43 crc kubenswrapper[5072]: I1124 11:25:43.485399 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37957a65-eea2-46e5-8aca-52d6d7a4681c-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:43 crc kubenswrapper[5072]: I1124 11:25:43.485408 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckftv\" (UniqueName: \"kubernetes.io/projected/37957a65-eea2-46e5-8aca-52d6d7a4681c-kube-api-access-ckftv\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:43 crc kubenswrapper[5072]: I1124 11:25:43.521850 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37957a65-eea2-46e5-8aca-52d6d7a4681c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "37957a65-eea2-46e5-8aca-52d6d7a4681c" (UID: "37957a65-eea2-46e5-8aca-52d6d7a4681c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:25:43 crc kubenswrapper[5072]: I1124 11:25:43.588508 5072 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37957a65-eea2-46e5-8aca-52d6d7a4681c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:43 crc kubenswrapper[5072]: I1124 11:25:43.603096 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:25:43 crc kubenswrapper[5072]: I1124 11:25:43.977366 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-9gmnp" event={"ID":"37957a65-eea2-46e5-8aca-52d6d7a4681c","Type":"ContainerDied","Data":"72ea1440c65d44d0972892a0a5c525359aef01c807bd4f87e57f01a7aa169e8b"} Nov 24 11:25:43 crc kubenswrapper[5072]: I1124 11:25:43.977427 5072 scope.go:117] "RemoveContainer" containerID="9c653e02cb0959080a1a52547e16f5b8a41b1bdfcd90a26db8119c4bcde681de" Nov 24 11:25:43 crc kubenswrapper[5072]: I1124 11:25:43.977391 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-9gmnp" Nov 24 11:25:43 crc kubenswrapper[5072]: I1124 11:25:43.991722 5072 generic.go:334] "Generic (PLEG): container finished" podID="09cc3e8f-663e-448b-b90f-8d794006c335" containerID="d934220f2b88c3c0da8cc478cf088ad1c5a8282506d738e4c323c259cbd686d2" exitCode=0 Nov 24 11:25:43 crc kubenswrapper[5072]: I1124 11:25:43.992952 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-gkdpr" event={"ID":"09cc3e8f-663e-448b-b90f-8d794006c335","Type":"ContainerDied","Data":"d934220f2b88c3c0da8cc478cf088ad1c5a8282506d738e4c323c259cbd686d2"} Nov 24 11:25:44 crc kubenswrapper[5072]: I1124 11:25:44.036887 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-9gmnp"] Nov 24 11:25:44 crc kubenswrapper[5072]: I1124 11:25:44.042283 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-9gmnp"] Nov 24 11:25:45 crc kubenswrapper[5072]: I1124 11:25:45.006873 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-gkdpr" event={"ID":"09cc3e8f-663e-448b-b90f-8d794006c335","Type":"ContainerStarted","Data":"a220cd3e03c994b9b665cb1ac88ac20ceafaee04d80bdc570e14bcfef12389bf"} Nov 24 11:25:45 crc kubenswrapper[5072]: I1124 11:25:45.007293 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7987f74bbc-gkdpr" Nov 24 11:25:45 crc kubenswrapper[5072]: I1124 11:25:45.032091 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37957a65-eea2-46e5-8aca-52d6d7a4681c" path="/var/lib/kubelet/pods/37957a65-eea2-46e5-8aca-52d6d7a4681c/volumes" Nov 24 11:25:45 crc kubenswrapper[5072]: I1124 11:25:45.033933 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7987f74bbc-gkdpr" podStartSLOduration=4.033908383 podStartE2EDuration="4.033908383s" podCreationTimestamp="2025-11-24 11:25:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:25:45.02748824 +0000 UTC m=+996.739012716" watchObservedRunningTime="2025-11-24 11:25:45.033908383 +0000 UTC m=+996.745432859" Nov 24 11:25:47 crc kubenswrapper[5072]: I1124 11:25:47.025933 5072 generic.go:334] "Generic (PLEG): container finished" podID="75768a7e-65f0-498e-8f4e-5e178de8110e" containerID="d47123d9a768cc80969cf1ab5eeb3b37a3f4ba43a727da9cffb6be1900702a41" exitCode=0 Nov 24 11:25:47 crc kubenswrapper[5072]: I1124 11:25:47.026766 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lzk69" event={"ID":"75768a7e-65f0-498e-8f4e-5e178de8110e","Type":"ContainerDied","Data":"d47123d9a768cc80969cf1ab5eeb3b37a3f4ba43a727da9cffb6be1900702a41"} Nov 24 11:25:49 crc kubenswrapper[5072]: I1124 11:25:49.163799 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lzk69" Nov 24 11:25:49 crc kubenswrapper[5072]: I1124 11:25:49.197642 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-combined-ca-bundle\") pod \"75768a7e-65f0-498e-8f4e-5e178de8110e\" (UID: \"75768a7e-65f0-498e-8f4e-5e178de8110e\") " Nov 24 11:25:49 crc kubenswrapper[5072]: I1124 11:25:49.198083 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n28fx\" (UniqueName: \"kubernetes.io/projected/75768a7e-65f0-498e-8f4e-5e178de8110e-kube-api-access-n28fx\") pod \"75768a7e-65f0-498e-8f4e-5e178de8110e\" (UID: \"75768a7e-65f0-498e-8f4e-5e178de8110e\") " Nov 24 11:25:49 crc kubenswrapper[5072]: I1124 11:25:49.198169 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-credential-keys\") pod \"75768a7e-65f0-498e-8f4e-5e178de8110e\" (UID: \"75768a7e-65f0-498e-8f4e-5e178de8110e\") " Nov 24 11:25:49 crc kubenswrapper[5072]: I1124 11:25:49.198217 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-scripts\") pod \"75768a7e-65f0-498e-8f4e-5e178de8110e\" (UID: \"75768a7e-65f0-498e-8f4e-5e178de8110e\") " Nov 24 11:25:49 crc kubenswrapper[5072]: I1124 11:25:49.198240 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-config-data\") pod \"75768a7e-65f0-498e-8f4e-5e178de8110e\" (UID: \"75768a7e-65f0-498e-8f4e-5e178de8110e\") " Nov 24 11:25:49 crc kubenswrapper[5072]: I1124 11:25:49.198267 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-fernet-keys\") pod \"75768a7e-65f0-498e-8f4e-5e178de8110e\" (UID: \"75768a7e-65f0-498e-8f4e-5e178de8110e\") " Nov 24 11:25:49 crc kubenswrapper[5072]: I1124 11:25:49.214526 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "75768a7e-65f0-498e-8f4e-5e178de8110e" (UID: "75768a7e-65f0-498e-8f4e-5e178de8110e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:25:49 crc kubenswrapper[5072]: I1124 11:25:49.217516 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "75768a7e-65f0-498e-8f4e-5e178de8110e" (UID: "75768a7e-65f0-498e-8f4e-5e178de8110e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:25:49 crc kubenswrapper[5072]: I1124 11:25:49.220230 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-scripts" (OuterVolumeSpecName: "scripts") pod "75768a7e-65f0-498e-8f4e-5e178de8110e" (UID: "75768a7e-65f0-498e-8f4e-5e178de8110e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:25:49 crc kubenswrapper[5072]: I1124 11:25:49.227438 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75768a7e-65f0-498e-8f4e-5e178de8110e-kube-api-access-n28fx" (OuterVolumeSpecName: "kube-api-access-n28fx") pod "75768a7e-65f0-498e-8f4e-5e178de8110e" (UID: "75768a7e-65f0-498e-8f4e-5e178de8110e"). InnerVolumeSpecName "kube-api-access-n28fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:25:49 crc kubenswrapper[5072]: I1124 11:25:49.230688 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-config-data" (OuterVolumeSpecName: "config-data") pod "75768a7e-65f0-498e-8f4e-5e178de8110e" (UID: "75768a7e-65f0-498e-8f4e-5e178de8110e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:25:49 crc kubenswrapper[5072]: I1124 11:25:49.244328 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75768a7e-65f0-498e-8f4e-5e178de8110e" (UID: "75768a7e-65f0-498e-8f4e-5e178de8110e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:25:49 crc kubenswrapper[5072]: I1124 11:25:49.301046 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:49 crc kubenswrapper[5072]: I1124 11:25:49.301082 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n28fx\" (UniqueName: \"kubernetes.io/projected/75768a7e-65f0-498e-8f4e-5e178de8110e-kube-api-access-n28fx\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:49 crc kubenswrapper[5072]: I1124 11:25:49.301094 5072 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:49 crc kubenswrapper[5072]: I1124 11:25:49.301104 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:49 crc kubenswrapper[5072]: I1124 11:25:49.301113 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:49 crc kubenswrapper[5072]: I1124 11:25:49.301120 5072 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/75768a7e-65f0-498e-8f4e-5e178de8110e-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.064885 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lzk69" event={"ID":"75768a7e-65f0-498e-8f4e-5e178de8110e","Type":"ContainerDied","Data":"76e52b5a4d4c3f8f086a62c8ae594ec13395efc2a694cab674e5d1da75a55bc3"} Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.065174 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76e52b5a4d4c3f8f086a62c8ae594ec13395efc2a694cab674e5d1da75a55bc3" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 
11:25:50.064997 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lzk69" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.349720 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-lzk69"] Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.355474 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-lzk69"] Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.442580 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jrmwr"] Nov 24 11:25:50 crc kubenswrapper[5072]: E1124 11:25:50.442877 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37957a65-eea2-46e5-8aca-52d6d7a4681c" containerName="init" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.442892 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="37957a65-eea2-46e5-8aca-52d6d7a4681c" containerName="init" Nov 24 11:25:50 crc kubenswrapper[5072]: E1124 11:25:50.442924 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75768a7e-65f0-498e-8f4e-5e178de8110e" containerName="keystone-bootstrap" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.442931 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="75768a7e-65f0-498e-8f4e-5e178de8110e" containerName="keystone-bootstrap" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.443075 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="75768a7e-65f0-498e-8f4e-5e178de8110e" containerName="keystone-bootstrap" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.443105 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="37957a65-eea2-46e5-8aca-52d6d7a4681c" containerName="init" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.443656 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jrmwr" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.445679 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.445867 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.445914 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-lc8qn" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.446056 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.453404 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jrmwr"] Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.460586 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.526010 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-config-data\") pod \"keystone-bootstrap-jrmwr\" (UID: \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\") " pod="openstack/keystone-bootstrap-jrmwr" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.528064 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-fernet-keys\") pod \"keystone-bootstrap-jrmwr\" (UID: \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\") " pod="openstack/keystone-bootstrap-jrmwr" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.528190 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-credential-keys\") pod \"keystone-bootstrap-jrmwr\" (UID: \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\") " pod="openstack/keystone-bootstrap-jrmwr" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.528235 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-combined-ca-bundle\") pod \"keystone-bootstrap-jrmwr\" (UID: \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\") " pod="openstack/keystone-bootstrap-jrmwr" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.528271 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l4vz\" (UniqueName: \"kubernetes.io/projected/b9d9bdb5-a7d6-4caf-9212-4707da33f459-kube-api-access-8l4vz\") pod \"keystone-bootstrap-jrmwr\" (UID: \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\") " pod="openstack/keystone-bootstrap-jrmwr" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.528314 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-scripts\") pod \"keystone-bootstrap-jrmwr\" (UID: \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\") " pod="openstack/keystone-bootstrap-jrmwr" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.630013 5072 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-config-data\") pod \"keystone-bootstrap-jrmwr\" (UID: \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\") " pod="openstack/keystone-bootstrap-jrmwr" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.630131 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-fernet-keys\") pod \"keystone-bootstrap-jrmwr\" (UID: \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\") " pod="openstack/keystone-bootstrap-jrmwr" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.630189 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-credential-keys\") pod \"keystone-bootstrap-jrmwr\" (UID: \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\") " pod="openstack/keystone-bootstrap-jrmwr" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.630225 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-combined-ca-bundle\") pod \"keystone-bootstrap-jrmwr\" (UID: \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\") " pod="openstack/keystone-bootstrap-jrmwr" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.630256 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l4vz\" (UniqueName: \"kubernetes.io/projected/b9d9bdb5-a7d6-4caf-9212-4707da33f459-kube-api-access-8l4vz\") pod \"keystone-bootstrap-jrmwr\" (UID: \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\") " pod="openstack/keystone-bootstrap-jrmwr" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.630298 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-scripts\") pod \"keystone-bootstrap-jrmwr\" (UID: \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\") " pod="openstack/keystone-bootstrap-jrmwr" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.636330 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-config-data\") pod \"keystone-bootstrap-jrmwr\" (UID: \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\") " pod="openstack/keystone-bootstrap-jrmwr" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.637103 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-scripts\") pod \"keystone-bootstrap-jrmwr\" (UID: \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\") " pod="openstack/keystone-bootstrap-jrmwr" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.638003 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-combined-ca-bundle\") pod \"keystone-bootstrap-jrmwr\" (UID: \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\") " pod="openstack/keystone-bootstrap-jrmwr" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.638265 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-fernet-keys\") pod \"keystone-bootstrap-jrmwr\" (UID: \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\") 
" pod="openstack/keystone-bootstrap-jrmwr" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.645657 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-credential-keys\") pod \"keystone-bootstrap-jrmwr\" (UID: \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\") " pod="openstack/keystone-bootstrap-jrmwr" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.648273 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l4vz\" (UniqueName: \"kubernetes.io/projected/b9d9bdb5-a7d6-4caf-9212-4707da33f459-kube-api-access-8l4vz\") pod \"keystone-bootstrap-jrmwr\" (UID: \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\") " pod="openstack/keystone-bootstrap-jrmwr" Nov 24 11:25:50 crc kubenswrapper[5072]: I1124 11:25:50.775642 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jrmwr" Nov 24 11:25:51 crc kubenswrapper[5072]: I1124 11:25:51.030691 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75768a7e-65f0-498e-8f4e-5e178de8110e" path="/var/lib/kubelet/pods/75768a7e-65f0-498e-8f4e-5e178de8110e/volumes" Nov 24 11:25:52 crc kubenswrapper[5072]: I1124 11:25:52.327595 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7987f74bbc-gkdpr" Nov 24 11:25:52 crc kubenswrapper[5072]: I1124 11:25:52.405423 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-w56kf"] Nov 24 11:25:52 crc kubenswrapper[5072]: I1124 11:25:52.405691 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" podUID="65c4aeb0-5394-4ff2-b993-449041d6ba77" containerName="dnsmasq-dns" containerID="cri-o://a6afe5388d692db48c23ec636539320874fa9385f06e96c71c08f8277c15fdf3" gracePeriod=10 Nov 24 11:25:53 crc kubenswrapper[5072]: I1124 11:25:53.096673 5072 generic.go:334] "Generic (PLEG): container finished" podID="65c4aeb0-5394-4ff2-b993-449041d6ba77" containerID="a6afe5388d692db48c23ec636539320874fa9385f06e96c71c08f8277c15fdf3" exitCode=0 Nov 24 11:25:53 crc kubenswrapper[5072]: I1124 11:25:53.096722 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" event={"ID":"65c4aeb0-5394-4ff2-b993-449041d6ba77","Type":"ContainerDied","Data":"a6afe5388d692db48c23ec636539320874fa9385f06e96c71c08f8277c15fdf3"} Nov 24 11:25:55 crc kubenswrapper[5072]: I1124 11:25:55.349922 5072 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" podUID="65c4aeb0-5394-4ff2-b993-449041d6ba77" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.121:5353: connect: connection refused" Nov 24 11:25:58 crc kubenswrapper[5072]: I1124 11:25:58.143145 5072 generic.go:334] "Generic (PLEG): container finished" podID="bb192e24-d3b0-4e96-8bbf-edb5b93ecf64" containerID="01682fdca88f8d5d594c3f26d4e2b74dcece45edb2e28f32c44602dfccc2f459" exitCode=0 Nov 24 11:25:58 crc kubenswrapper[5072]: I1124 11:25:58.143251 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-w6mv2" event={"ID":"bb192e24-d3b0-4e96-8bbf-edb5b93ecf64","Type":"ContainerDied","Data":"01682fdca88f8d5d594c3f26d4e2b74dcece45edb2e28f32c44602dfccc2f459"} Nov 24 11:26:00 crc kubenswrapper[5072]: I1124 11:26:00.349961 5072 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" podUID="65c4aeb0-5394-4ff2-b993-449041d6ba77" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.121:5353: connect: connection refused" Nov 24 11:26:02 crc kubenswrapper[5072]: E1124 11:26:02.404294 5072 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 24 11:26:02 crc kubenswrapper[5072]: E1124 11:26:02.404666 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rvgkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-g5npx_openstack(feff4031-5012-468f-8dd6-d58c5dae8d29): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:26:02 crc kubenswrapper[5072]: E1124 11:26:02.405853 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-g5npx" podUID="feff4031-5012-468f-8dd6-d58c5dae8d29" Nov 24 11:26:02 crc kubenswrapper[5072]: I1124 11:26:02.493430 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-w6mv2" Nov 24 11:26:02 crc kubenswrapper[5072]: I1124 11:26:02.587424 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb192e24-d3b0-4e96-8bbf-edb5b93ecf64-combined-ca-bundle\") pod \"bb192e24-d3b0-4e96-8bbf-edb5b93ecf64\" (UID: \"bb192e24-d3b0-4e96-8bbf-edb5b93ecf64\") " Nov 24 11:26:02 crc kubenswrapper[5072]: I1124 11:26:02.587482 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx66f\" (UniqueName: \"kubernetes.io/projected/bb192e24-d3b0-4e96-8bbf-edb5b93ecf64-kube-api-access-dx66f\") pod \"bb192e24-d3b0-4e96-8bbf-edb5b93ecf64\" (UID: \"bb192e24-d3b0-4e96-8bbf-edb5b93ecf64\") " Nov 24 11:26:02 crc kubenswrapper[5072]: I1124 11:26:02.587568 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bb192e24-d3b0-4e96-8bbf-edb5b93ecf64-config\") pod \"bb192e24-d3b0-4e96-8bbf-edb5b93ecf64\" (UID: \"bb192e24-d3b0-4e96-8bbf-edb5b93ecf64\") " Nov 24 11:26:02 crc kubenswrapper[5072]: I1124 11:26:02.594635 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb192e24-d3b0-4e96-8bbf-edb5b93ecf64-kube-api-access-dx66f" (OuterVolumeSpecName: "kube-api-access-dx66f") pod "bb192e24-d3b0-4e96-8bbf-edb5b93ecf64" (UID: "bb192e24-d3b0-4e96-8bbf-edb5b93ecf64"). InnerVolumeSpecName "kube-api-access-dx66f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:26:02 crc kubenswrapper[5072]: I1124 11:26:02.614226 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb192e24-d3b0-4e96-8bbf-edb5b93ecf64-config" (OuterVolumeSpecName: "config") pod "bb192e24-d3b0-4e96-8bbf-edb5b93ecf64" (UID: "bb192e24-d3b0-4e96-8bbf-edb5b93ecf64"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:02 crc kubenswrapper[5072]: I1124 11:26:02.622801 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb192e24-d3b0-4e96-8bbf-edb5b93ecf64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb192e24-d3b0-4e96-8bbf-edb5b93ecf64" (UID: "bb192e24-d3b0-4e96-8bbf-edb5b93ecf64"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:02 crc kubenswrapper[5072]: I1124 11:26:02.689362 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb192e24-d3b0-4e96-8bbf-edb5b93ecf64-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:02 crc kubenswrapper[5072]: I1124 11:26:02.689412 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx66f\" (UniqueName: \"kubernetes.io/projected/bb192e24-d3b0-4e96-8bbf-edb5b93ecf64-kube-api-access-dx66f\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:02 crc kubenswrapper[5072]: I1124 11:26:02.689426 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/bb192e24-d3b0-4e96-8bbf-edb5b93ecf64-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.190584 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-w6mv2" event={"ID":"bb192e24-d3b0-4e96-8bbf-edb5b93ecf64","Type":"ContainerDied","Data":"22b61f5c47c23120a3be2e16de692b6d797a5c4f4b1ffed88756c7ee883898ac"} Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.190667 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22b61f5c47c23120a3be2e16de692b6d797a5c4f4b1ffed88756c7ee883898ac" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.190730 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-w6mv2" Nov 24 11:26:03 crc kubenswrapper[5072]: E1124 11:26:03.194262 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-g5npx" podUID="feff4031-5012-468f-8dd6-d58c5dae8d29" Nov 24 11:26:03 crc kubenswrapper[5072]: E1124 11:26:03.589892 5072 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 24 11:26:03 crc kubenswrapper[5072]: E1124 11:26:03.590296 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8tl5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-8npk7_openstack(ab063039-b4d9-45d8-9336-35316fd1ab08): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:26:03 crc kubenswrapper[5072]: E1124 11:26:03.591584 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-8npk7" podUID="ab063039-b4d9-45d8-9336-35316fd1ab08" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.704715 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.787617 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-n4llq"] Nov 24 11:26:03 crc kubenswrapper[5072]: E1124 11:26:03.787956 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb192e24-d3b0-4e96-8bbf-edb5b93ecf64" containerName="neutron-db-sync" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.787973 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb192e24-d3b0-4e96-8bbf-edb5b93ecf64" containerName="neutron-db-sync" Nov 24 11:26:03 crc kubenswrapper[5072]: E1124 11:26:03.787990 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c4aeb0-5394-4ff2-b993-449041d6ba77" containerName="init" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.787996 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c4aeb0-5394-4ff2-b993-449041d6ba77" containerName="init" Nov 24 11:26:03 crc kubenswrapper[5072]: E1124 11:26:03.788007 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c4aeb0-5394-4ff2-b993-449041d6ba77" containerName="dnsmasq-dns" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.788013 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c4aeb0-5394-4ff2-b993-449041d6ba77" containerName="dnsmasq-dns" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.788167 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="65c4aeb0-5394-4ff2-b993-449041d6ba77" containerName="dnsmasq-dns" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.788188 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb192e24-d3b0-4e96-8bbf-edb5b93ecf64" containerName="neutron-db-sync" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.788975 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-n4llq" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.812279 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65c4aeb0-5394-4ff2-b993-449041d6ba77-config\") pod \"65c4aeb0-5394-4ff2-b993-449041d6ba77\" (UID: \"65c4aeb0-5394-4ff2-b993-449041d6ba77\") " Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.812509 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-n4llq"] Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.812534 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65c4aeb0-5394-4ff2-b993-449041d6ba77-ovsdbserver-nb\") pod \"65c4aeb0-5394-4ff2-b993-449041d6ba77\" (UID: \"65c4aeb0-5394-4ff2-b993-449041d6ba77\") " Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.812697 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65c4aeb0-5394-4ff2-b993-449041d6ba77-ovsdbserver-sb\") pod \"65c4aeb0-5394-4ff2-b993-449041d6ba77\" (UID: \"65c4aeb0-5394-4ff2-b993-449041d6ba77\") " Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.812765 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnsv2\" (UniqueName: \"kubernetes.io/projected/65c4aeb0-5394-4ff2-b993-449041d6ba77-kube-api-access-wnsv2\") pod \"65c4aeb0-5394-4ff2-b993-449041d6ba77\" (UID: \"65c4aeb0-5394-4ff2-b993-449041d6ba77\") " Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.812828 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65c4aeb0-5394-4ff2-b993-449041d6ba77-dns-svc\") pod \"65c4aeb0-5394-4ff2-b993-449041d6ba77\" (UID: \"65c4aeb0-5394-4ff2-b993-449041d6ba77\") " Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.813144 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0569a2f4-e2fb-4625-a547-a9244109a287-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-n4llq\" (UID: \"0569a2f4-e2fb-4625-a547-a9244109a287\") " pod="openstack/dnsmasq-dns-7b946d459c-n4llq" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.813242 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0569a2f4-e2fb-4625-a547-a9244109a287-dns-svc\") pod \"dnsmasq-dns-7b946d459c-n4llq\" (UID: \"0569a2f4-e2fb-4625-a547-a9244109a287\") " pod="openstack/dnsmasq-dns-7b946d459c-n4llq" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.813426 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0569a2f4-e2fb-4625-a547-a9244109a287-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-n4llq\" (UID: \"0569a2f4-e2fb-4625-a547-a9244109a287\") " pod="openstack/dnsmasq-dns-7b946d459c-n4llq" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.813525 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0569a2f4-e2fb-4625-a547-a9244109a287-config\") pod \"dnsmasq-dns-7b946d459c-n4llq\" (UID: \"0569a2f4-e2fb-4625-a547-a9244109a287\") " 
pod="openstack/dnsmasq-dns-7b946d459c-n4llq" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.813557 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27w5p\" (UniqueName: \"kubernetes.io/projected/0569a2f4-e2fb-4625-a547-a9244109a287-kube-api-access-27w5p\") pod \"dnsmasq-dns-7b946d459c-n4llq\" (UID: \"0569a2f4-e2fb-4625-a547-a9244109a287\") " pod="openstack/dnsmasq-dns-7b946d459c-n4llq" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.865446 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6765f59d56-zj7gz"] Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.866805 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6765f59d56-zj7gz" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.871871 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.872033 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.872138 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-8lj7t" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.872258 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.896433 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6765f59d56-zj7gz"] Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.896809 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65c4aeb0-5394-4ff2-b993-449041d6ba77-kube-api-access-wnsv2" (OuterVolumeSpecName: "kube-api-access-wnsv2") pod "65c4aeb0-5394-4ff2-b993-449041d6ba77" (UID: "65c4aeb0-5394-4ff2-b993-449041d6ba77"). InnerVolumeSpecName "kube-api-access-wnsv2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.918612 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ea6b17ec-1925-4441-965e-9f2eeca16bec-httpd-config\") pod \"neutron-6765f59d56-zj7gz\" (UID: \"ea6b17ec-1925-4441-965e-9f2eeca16bec\") " pod="openstack/neutron-6765f59d56-zj7gz" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.918661 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea6b17ec-1925-4441-965e-9f2eeca16bec-combined-ca-bundle\") pod \"neutron-6765f59d56-zj7gz\" (UID: \"ea6b17ec-1925-4441-965e-9f2eeca16bec\") " pod="openstack/neutron-6765f59d56-zj7gz" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.918692 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0569a2f4-e2fb-4625-a547-a9244109a287-config\") pod \"dnsmasq-dns-7b946d459c-n4llq\" (UID: \"0569a2f4-e2fb-4625-a547-a9244109a287\") " pod="openstack/dnsmasq-dns-7b946d459c-n4llq" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.918715 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27w5p\" (UniqueName: \"kubernetes.io/projected/0569a2f4-e2fb-4625-a547-a9244109a287-kube-api-access-27w5p\") pod \"dnsmasq-dns-7b946d459c-n4llq\" (UID: \"0569a2f4-e2fb-4625-a547-a9244109a287\") " pod="openstack/dnsmasq-dns-7b946d459c-n4llq" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.918753 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0569a2f4-e2fb-4625-a547-a9244109a287-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-n4llq\" (UID: \"0569a2f4-e2fb-4625-a547-a9244109a287\") " pod="openstack/dnsmasq-dns-7b946d459c-n4llq" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.918801 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0569a2f4-e2fb-4625-a547-a9244109a287-dns-svc\") pod \"dnsmasq-dns-7b946d459c-n4llq\" (UID: \"0569a2f4-e2fb-4625-a547-a9244109a287\") " pod="openstack/dnsmasq-dns-7b946d459c-n4llq" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.918834 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea6b17ec-1925-4441-965e-9f2eeca16bec-ovndb-tls-certs\") pod \"neutron-6765f59d56-zj7gz\" (UID: \"ea6b17ec-1925-4441-965e-9f2eeca16bec\") " pod="openstack/neutron-6765f59d56-zj7gz" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.918882 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m66fz\" (UniqueName: \"kubernetes.io/projected/ea6b17ec-1925-4441-965e-9f2eeca16bec-kube-api-access-m66fz\") pod \"neutron-6765f59d56-zj7gz\" (UID: \"ea6b17ec-1925-4441-965e-9f2eeca16bec\") " pod="openstack/neutron-6765f59d56-zj7gz" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.918914 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0569a2f4-e2fb-4625-a547-a9244109a287-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-n4llq\" (UID: \"0569a2f4-e2fb-4625-a547-a9244109a287\") " 
pod="openstack/dnsmasq-dns-7b946d459c-n4llq" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.918944 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ea6b17ec-1925-4441-965e-9f2eeca16bec-config\") pod \"neutron-6765f59d56-zj7gz\" (UID: \"ea6b17ec-1925-4441-965e-9f2eeca16bec\") " pod="openstack/neutron-6765f59d56-zj7gz" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.918995 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnsv2\" (UniqueName: \"kubernetes.io/projected/65c4aeb0-5394-4ff2-b993-449041d6ba77-kube-api-access-wnsv2\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.920096 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0569a2f4-e2fb-4625-a547-a9244109a287-config\") pod \"dnsmasq-dns-7b946d459c-n4llq\" (UID: \"0569a2f4-e2fb-4625-a547-a9244109a287\") " pod="openstack/dnsmasq-dns-7b946d459c-n4llq" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.920699 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0569a2f4-e2fb-4625-a547-a9244109a287-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-n4llq\" (UID: \"0569a2f4-e2fb-4625-a547-a9244109a287\") " pod="openstack/dnsmasq-dns-7b946d459c-n4llq" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.920931 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0569a2f4-e2fb-4625-a547-a9244109a287-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-n4llq\" (UID: \"0569a2f4-e2fb-4625-a547-a9244109a287\") " pod="openstack/dnsmasq-dns-7b946d459c-n4llq" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.921261 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0569a2f4-e2fb-4625-a547-a9244109a287-dns-svc\") pod \"dnsmasq-dns-7b946d459c-n4llq\" (UID: \"0569a2f4-e2fb-4625-a547-a9244109a287\") " pod="openstack/dnsmasq-dns-7b946d459c-n4llq" Nov 24 11:26:03 crc kubenswrapper[5072]: I1124 11:26:03.963118 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27w5p\" (UniqueName: \"kubernetes.io/projected/0569a2f4-e2fb-4625-a547-a9244109a287-kube-api-access-27w5p\") pod \"dnsmasq-dns-7b946d459c-n4llq\" (UID: \"0569a2f4-e2fb-4625-a547-a9244109a287\") " pod="openstack/dnsmasq-dns-7b946d459c-n4llq" Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.025256 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m66fz\" (UniqueName: \"kubernetes.io/projected/ea6b17ec-1925-4441-965e-9f2eeca16bec-kube-api-access-m66fz\") pod \"neutron-6765f59d56-zj7gz\" (UID: \"ea6b17ec-1925-4441-965e-9f2eeca16bec\") " pod="openstack/neutron-6765f59d56-zj7gz" Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.025325 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ea6b17ec-1925-4441-965e-9f2eeca16bec-config\") pod \"neutron-6765f59d56-zj7gz\" (UID: \"ea6b17ec-1925-4441-965e-9f2eeca16bec\") " pod="openstack/neutron-6765f59d56-zj7gz" Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.025349 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/ea6b17ec-1925-4441-965e-9f2eeca16bec-httpd-config\") pod \"neutron-6765f59d56-zj7gz\" (UID: \"ea6b17ec-1925-4441-965e-9f2eeca16bec\") " pod="openstack/neutron-6765f59d56-zj7gz" Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.025388 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea6b17ec-1925-4441-965e-9f2eeca16bec-combined-ca-bundle\") pod \"neutron-6765f59d56-zj7gz\" (UID: \"ea6b17ec-1925-4441-965e-9f2eeca16bec\") " pod="openstack/neutron-6765f59d56-zj7gz" Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.025475 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea6b17ec-1925-4441-965e-9f2eeca16bec-ovndb-tls-certs\") pod \"neutron-6765f59d56-zj7gz\" (UID: \"ea6b17ec-1925-4441-965e-9f2eeca16bec\") " pod="openstack/neutron-6765f59d56-zj7gz" Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.038144 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ea6b17ec-1925-4441-965e-9f2eeca16bec-httpd-config\") pod \"neutron-6765f59d56-zj7gz\" (UID: \"ea6b17ec-1925-4441-965e-9f2eeca16bec\") " pod="openstack/neutron-6765f59d56-zj7gz" Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.038862 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea6b17ec-1925-4441-965e-9f2eeca16bec-ovndb-tls-certs\") pod \"neutron-6765f59d56-zj7gz\" (UID: \"ea6b17ec-1925-4441-965e-9f2eeca16bec\") " pod="openstack/neutron-6765f59d56-zj7gz" Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.041570 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m66fz\" (UniqueName: \"kubernetes.io/projected/ea6b17ec-1925-4441-965e-9f2eeca16bec-kube-api-access-m66fz\") pod \"neutron-6765f59d56-zj7gz\" (UID: \"ea6b17ec-1925-4441-965e-9f2eeca16bec\") " pod="openstack/neutron-6765f59d56-zj7gz" Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.043832 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea6b17ec-1925-4441-965e-9f2eeca16bec-combined-ca-bundle\") pod \"neutron-6765f59d56-zj7gz\" (UID: \"ea6b17ec-1925-4441-965e-9f2eeca16bec\") " pod="openstack/neutron-6765f59d56-zj7gz" Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.047560 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ea6b17ec-1925-4441-965e-9f2eeca16bec-config\") pod \"neutron-6765f59d56-zj7gz\" (UID: \"ea6b17ec-1925-4441-965e-9f2eeca16bec\") " pod="openstack/neutron-6765f59d56-zj7gz" Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.058687 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65c4aeb0-5394-4ff2-b993-449041d6ba77-config" (OuterVolumeSpecName: "config") pod "65c4aeb0-5394-4ff2-b993-449041d6ba77" (UID: "65c4aeb0-5394-4ff2-b993-449041d6ba77"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.094255 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65c4aeb0-5394-4ff2-b993-449041d6ba77-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "65c4aeb0-5394-4ff2-b993-449041d6ba77" (UID: "65c4aeb0-5394-4ff2-b993-449041d6ba77"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:26:04 crc kubenswrapper[5072]: W1124 11:26:04.095613 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9d9bdb5_a7d6_4caf_9212_4707da33f459.slice/crio-b02c519107a937abb9eb7a9aa2d97d5dadf52caa8aa30dff9b3cb869ea082c6f WatchSource:0}: Error finding container b02c519107a937abb9eb7a9aa2d97d5dadf52caa8aa30dff9b3cb869ea082c6f: Status 404 returned error can't find the container with id b02c519107a937abb9eb7a9aa2d97d5dadf52caa8aa30dff9b3cb869ea082c6f Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.097597 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jrmwr"] Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.126967 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65c4aeb0-5394-4ff2-b993-449041d6ba77-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.127229 5072 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65c4aeb0-5394-4ff2-b993-449041d6ba77-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.131874 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-n4llq" Nov 24 11:26:04 crc kubenswrapper[5072]: E1124 11:26:04.141692 5072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/65c4aeb0-5394-4ff2-b993-449041d6ba77-ovsdbserver-sb podName:65c4aeb0-5394-4ff2-b993-449041d6ba77 nodeName:}" failed. No retries permitted until 2025-11-24 11:26:04.641665928 +0000 UTC m=+1016.353190404 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ovsdbserver-sb" (UniqueName: "kubernetes.io/configmap/65c4aeb0-5394-4ff2-b993-449041d6ba77-ovsdbserver-sb") pod "65c4aeb0-5394-4ff2-b993-449041d6ba77" (UID: "65c4aeb0-5394-4ff2-b993-449041d6ba77") : error deleting /var/lib/kubelet/pods/65c4aeb0-5394-4ff2-b993-449041d6ba77/volume-subpaths: remove /var/lib/kubelet/pods/65c4aeb0-5394-4ff2-b993-449041d6ba77/volume-subpaths: no such file or directory Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.142218 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65c4aeb0-5394-4ff2-b993-449041d6ba77-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "65c4aeb0-5394-4ff2-b993-449041d6ba77" (UID: "65c4aeb0-5394-4ff2-b993-449041d6ba77"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.187798 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6765f59d56-zj7gz" Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.229494 5072 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65c4aeb0-5394-4ff2-b993-449041d6ba77-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.233295 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jrmwr" event={"ID":"b9d9bdb5-a7d6-4caf-9212-4707da33f459","Type":"ContainerStarted","Data":"b02c519107a937abb9eb7a9aa2d97d5dadf52caa8aa30dff9b3cb869ea082c6f"} Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.239967 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6wkj4" event={"ID":"68f6d27e-d239-4e24-8381-872893433a07","Type":"ContainerStarted","Data":"7d6b84973fd5541609924ca765899daaaa67f701c20299c73f35e8c6a1ccfc28"} Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.249784 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15","Type":"ContainerStarted","Data":"08862e0312856263cde359eba19295ccb970707b32c1800e019b48031123b752"} Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.254748 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.254898 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-w56kf" event={"ID":"65c4aeb0-5394-4ff2-b993-449041d6ba77","Type":"ContainerDied","Data":"6ccdb1a5e1a5d38d9960866157aa4333c206525d7abde7bb8b8b2f86220a5a1d"} Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.254926 5072 scope.go:117] "RemoveContainer" containerID="a6afe5388d692db48c23ec636539320874fa9385f06e96c71c08f8277c15fdf3" Nov 24 11:26:04 crc kubenswrapper[5072]: E1124 11:26:04.255652 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-8npk7" podUID="ab063039-b4d9-45d8-9336-35316fd1ab08" Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.263428 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-6wkj4" podStartSLOduration=2.592477206 podStartE2EDuration="23.263411162s" podCreationTimestamp="2025-11-24 11:25:41 +0000 UTC" firstStartedPulling="2025-11-24 11:25:42.865338749 +0000 UTC m=+994.576863225" lastFinishedPulling="2025-11-24 11:26:03.536272705 +0000 UTC m=+1015.247797181" observedRunningTime="2025-11-24 11:26:04.261889503 +0000 UTC m=+1015.973413979" watchObservedRunningTime="2025-11-24 11:26:04.263411162 +0000 UTC m=+1015.974935638" Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.302335 5072 scope.go:117] "RemoveContainer" containerID="8ec854b2cfbd331db577cc2df1c111b686beff0b181fb0a7c05bba54a207a5ee" Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.736485 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65c4aeb0-5394-4ff2-b993-449041d6ba77-ovsdbserver-sb\") pod \"65c4aeb0-5394-4ff2-b993-449041d6ba77\" (UID: \"65c4aeb0-5394-4ff2-b993-449041d6ba77\") " Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.737565 5072 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65c4aeb0-5394-4ff2-b993-449041d6ba77-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "65c4aeb0-5394-4ff2-b993-449041d6ba77" (UID: "65c4aeb0-5394-4ff2-b993-449041d6ba77"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.838663 5072 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65c4aeb0-5394-4ff2-b993-449041d6ba77-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.906551 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-w56kf"] Nov 24 11:26:04 crc kubenswrapper[5072]: I1124 11:26:04.917344 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-w56kf"] Nov 24 11:26:05 crc kubenswrapper[5072]: I1124 11:26:05.033133 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65c4aeb0-5394-4ff2-b993-449041d6ba77" path="/var/lib/kubelet/pods/65c4aeb0-5394-4ff2-b993-449041d6ba77/volumes" Nov 24 11:26:05 crc kubenswrapper[5072]: I1124 11:26:05.102669 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-n4llq"] Nov 24 11:26:05 crc kubenswrapper[5072]: I1124 11:26:05.172351 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6765f59d56-zj7gz"] Nov 24 11:26:05 crc kubenswrapper[5072]: I1124 11:26:05.272292 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jrmwr" event={"ID":"b9d9bdb5-a7d6-4caf-9212-4707da33f459","Type":"ContainerStarted","Data":"95efdc3d4ac893766dbae25cc0770efd6934b697c873d7eb81fc63d472f44a96"} Nov 24 11:26:05 crc kubenswrapper[5072]: I1124 11:26:05.294159 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jrmwr" podStartSLOduration=15.294138907 podStartE2EDuration="15.294138907s" podCreationTimestamp="2025-11-24 11:25:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:26:05.290728611 +0000 UTC m=+1017.002253087" watchObservedRunningTime="2025-11-24 11:26:05.294138907 +0000 UTC m=+1017.005663383" Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.227362 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6dc7d7697-tf7nw"] Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.231178 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6dc7d7697-tf7nw" Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.234339 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.234580 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.253364 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6dc7d7697-tf7nw"] Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.257342 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxkfq\" (UniqueName: \"kubernetes.io/projected/c1ae9399-6f4c-4053-84c8-821eb2867dc8-kube-api-access-cxkfq\") pod \"neutron-6dc7d7697-tf7nw\" (UID: \"c1ae9399-6f4c-4053-84c8-821eb2867dc8\") " pod="openstack/neutron-6dc7d7697-tf7nw" Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.257455 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c1ae9399-6f4c-4053-84c8-821eb2867dc8-httpd-config\") pod \"neutron-6dc7d7697-tf7nw\" (UID: \"c1ae9399-6f4c-4053-84c8-821eb2867dc8\") " pod="openstack/neutron-6dc7d7697-tf7nw" Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.257607 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c1ae9399-6f4c-4053-84c8-821eb2867dc8-config\") pod \"neutron-6dc7d7697-tf7nw\" (UID: \"c1ae9399-6f4c-4053-84c8-821eb2867dc8\") " pod="openstack/neutron-6dc7d7697-tf7nw" Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.257678 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1ae9399-6f4c-4053-84c8-821eb2867dc8-internal-tls-certs\") pod \"neutron-6dc7d7697-tf7nw\" (UID: \"c1ae9399-6f4c-4053-84c8-821eb2867dc8\") " pod="openstack/neutron-6dc7d7697-tf7nw" Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.257742 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1ae9399-6f4c-4053-84c8-821eb2867dc8-combined-ca-bundle\") pod \"neutron-6dc7d7697-tf7nw\" (UID: \"c1ae9399-6f4c-4053-84c8-821eb2867dc8\") " pod="openstack/neutron-6dc7d7697-tf7nw" Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.257813 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1ae9399-6f4c-4053-84c8-821eb2867dc8-ovndb-tls-certs\") pod \"neutron-6dc7d7697-tf7nw\" (UID: \"c1ae9399-6f4c-4053-84c8-821eb2867dc8\") " pod="openstack/neutron-6dc7d7697-tf7nw" Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.257889 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1ae9399-6f4c-4053-84c8-821eb2867dc8-public-tls-certs\") pod \"neutron-6dc7d7697-tf7nw\" (UID: \"c1ae9399-6f4c-4053-84c8-821eb2867dc8\") " pod="openstack/neutron-6dc7d7697-tf7nw" Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.285392 5072 generic.go:334] "Generic (PLEG): container finished" podID="0569a2f4-e2fb-4625-a547-a9244109a287" 
containerID="fbe7265e908585ef0adee5887602c27361c3e52b01e60532bf15f49311b82a21" exitCode=0 Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.285482 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-n4llq" event={"ID":"0569a2f4-e2fb-4625-a547-a9244109a287","Type":"ContainerDied","Data":"fbe7265e908585ef0adee5887602c27361c3e52b01e60532bf15f49311b82a21"} Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.285507 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-n4llq" event={"ID":"0569a2f4-e2fb-4625-a547-a9244109a287","Type":"ContainerStarted","Data":"810306c0b02a9c0d6c50fef46a80e382fd1bfb2df7dc1b35d6877adc5ce49677"} Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.299014 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6765f59d56-zj7gz" event={"ID":"ea6b17ec-1925-4441-965e-9f2eeca16bec","Type":"ContainerStarted","Data":"fa3af4260987b08192d8788da8a5f087c0f3f8e5cbd5e787586354887bec78fe"} Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.299051 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6765f59d56-zj7gz" event={"ID":"ea6b17ec-1925-4441-965e-9f2eeca16bec","Type":"ContainerStarted","Data":"520695adde43cd501b9afc9befe9d308cef3532d7c842639fa0993497d308b4e"} Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.299060 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6765f59d56-zj7gz" event={"ID":"ea6b17ec-1925-4441-965e-9f2eeca16bec","Type":"ContainerStarted","Data":"75e77858822e47f2caedc6238227e146f0d48c793a75683695151e48c31da8fa"} Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.299748 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6765f59d56-zj7gz" Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.301601 5072 generic.go:334] "Generic (PLEG): container finished" podID="68f6d27e-d239-4e24-8381-872893433a07" containerID="7d6b84973fd5541609924ca765899daaaa67f701c20299c73f35e8c6a1ccfc28" exitCode=0 Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.301642 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6wkj4" event={"ID":"68f6d27e-d239-4e24-8381-872893433a07","Type":"ContainerDied","Data":"7d6b84973fd5541609924ca765899daaaa67f701c20299c73f35e8c6a1ccfc28"} Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.309463 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15","Type":"ContainerStarted","Data":"43986ad77a0fa21d6223cb16aed3a85747ac1c462e8ad731db536723897da2b2"} Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.331965 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6765f59d56-zj7gz" podStartSLOduration=3.331949093 podStartE2EDuration="3.331949093s" podCreationTimestamp="2025-11-24 11:26:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:26:06.331875811 +0000 UTC m=+1018.043400297" watchObservedRunningTime="2025-11-24 11:26:06.331949093 +0000 UTC m=+1018.043473569" Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.359574 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1ae9399-6f4c-4053-84c8-821eb2867dc8-ovndb-tls-certs\") pod \"neutron-6dc7d7697-tf7nw\" (UID: 
\"c1ae9399-6f4c-4053-84c8-821eb2867dc8\") " pod="openstack/neutron-6dc7d7697-tf7nw" Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.359621 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1ae9399-6f4c-4053-84c8-821eb2867dc8-public-tls-certs\") pod \"neutron-6dc7d7697-tf7nw\" (UID: \"c1ae9399-6f4c-4053-84c8-821eb2867dc8\") " pod="openstack/neutron-6dc7d7697-tf7nw" Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.359845 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxkfq\" (UniqueName: \"kubernetes.io/projected/c1ae9399-6f4c-4053-84c8-821eb2867dc8-kube-api-access-cxkfq\") pod \"neutron-6dc7d7697-tf7nw\" (UID: \"c1ae9399-6f4c-4053-84c8-821eb2867dc8\") " pod="openstack/neutron-6dc7d7697-tf7nw" Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.359870 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c1ae9399-6f4c-4053-84c8-821eb2867dc8-httpd-config\") pod \"neutron-6dc7d7697-tf7nw\" (UID: \"c1ae9399-6f4c-4053-84c8-821eb2867dc8\") " pod="openstack/neutron-6dc7d7697-tf7nw" Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.359895 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c1ae9399-6f4c-4053-84c8-821eb2867dc8-config\") pod \"neutron-6dc7d7697-tf7nw\" (UID: \"c1ae9399-6f4c-4053-84c8-821eb2867dc8\") " pod="openstack/neutron-6dc7d7697-tf7nw" Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.359929 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1ae9399-6f4c-4053-84c8-821eb2867dc8-internal-tls-certs\") pod \"neutron-6dc7d7697-tf7nw\" (UID: \"c1ae9399-6f4c-4053-84c8-821eb2867dc8\") " pod="openstack/neutron-6dc7d7697-tf7nw" Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.359960 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1ae9399-6f4c-4053-84c8-821eb2867dc8-combined-ca-bundle\") pod \"neutron-6dc7d7697-tf7nw\" (UID: \"c1ae9399-6f4c-4053-84c8-821eb2867dc8\") " pod="openstack/neutron-6dc7d7697-tf7nw" Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.366169 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1ae9399-6f4c-4053-84c8-821eb2867dc8-combined-ca-bundle\") pod \"neutron-6dc7d7697-tf7nw\" (UID: \"c1ae9399-6f4c-4053-84c8-821eb2867dc8\") " pod="openstack/neutron-6dc7d7697-tf7nw" Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.366493 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1ae9399-6f4c-4053-84c8-821eb2867dc8-ovndb-tls-certs\") pod \"neutron-6dc7d7697-tf7nw\" (UID: \"c1ae9399-6f4c-4053-84c8-821eb2867dc8\") " pod="openstack/neutron-6dc7d7697-tf7nw" Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.369126 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1ae9399-6f4c-4053-84c8-821eb2867dc8-internal-tls-certs\") pod \"neutron-6dc7d7697-tf7nw\" (UID: \"c1ae9399-6f4c-4053-84c8-821eb2867dc8\") " pod="openstack/neutron-6dc7d7697-tf7nw" Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.371899 5072 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c1ae9399-6f4c-4053-84c8-821eb2867dc8-config\") pod \"neutron-6dc7d7697-tf7nw\" (UID: \"c1ae9399-6f4c-4053-84c8-821eb2867dc8\") " pod="openstack/neutron-6dc7d7697-tf7nw" Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.377003 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1ae9399-6f4c-4053-84c8-821eb2867dc8-public-tls-certs\") pod \"neutron-6dc7d7697-tf7nw\" (UID: \"c1ae9399-6f4c-4053-84c8-821eb2867dc8\") " pod="openstack/neutron-6dc7d7697-tf7nw" Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.390195 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxkfq\" (UniqueName: \"kubernetes.io/projected/c1ae9399-6f4c-4053-84c8-821eb2867dc8-kube-api-access-cxkfq\") pod \"neutron-6dc7d7697-tf7nw\" (UID: \"c1ae9399-6f4c-4053-84c8-821eb2867dc8\") " pod="openstack/neutron-6dc7d7697-tf7nw" Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.390205 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c1ae9399-6f4c-4053-84c8-821eb2867dc8-httpd-config\") pod \"neutron-6dc7d7697-tf7nw\" (UID: \"c1ae9399-6f4c-4053-84c8-821eb2867dc8\") " pod="openstack/neutron-6dc7d7697-tf7nw" Nov 24 11:26:06 crc kubenswrapper[5072]: I1124 11:26:06.548583 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6dc7d7697-tf7nw" Nov 24 11:26:07 crc kubenswrapper[5072]: I1124 11:26:07.164286 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6dc7d7697-tf7nw"] Nov 24 11:26:07 crc kubenswrapper[5072]: I1124 11:26:07.317968 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-n4llq" event={"ID":"0569a2f4-e2fb-4625-a547-a9244109a287","Type":"ContainerStarted","Data":"aa5f178a132c6f24fb4bd764a33ef9d6d4aac489ef3620699f3193e1f0778570"} Nov 24 11:26:07 crc kubenswrapper[5072]: I1124 11:26:07.319055 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b946d459c-n4llq" Nov 24 11:26:07 crc kubenswrapper[5072]: I1124 11:26:07.320421 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dc7d7697-tf7nw" event={"ID":"c1ae9399-6f4c-4053-84c8-821eb2867dc8","Type":"ContainerStarted","Data":"0db671731781b522130b7fff2f390b28b432bf6339fca7d6db435cf597f4e0a4"} Nov 24 11:26:07 crc kubenswrapper[5072]: I1124 11:26:07.322794 5072 generic.go:334] "Generic (PLEG): container finished" podID="b9d9bdb5-a7d6-4caf-9212-4707da33f459" containerID="95efdc3d4ac893766dbae25cc0770efd6934b697c873d7eb81fc63d472f44a96" exitCode=0 Nov 24 11:26:07 crc kubenswrapper[5072]: I1124 11:26:07.322837 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jrmwr" event={"ID":"b9d9bdb5-a7d6-4caf-9212-4707da33f459","Type":"ContainerDied","Data":"95efdc3d4ac893766dbae25cc0770efd6934b697c873d7eb81fc63d472f44a96"} Nov 24 11:26:07 crc kubenswrapper[5072]: I1124 11:26:07.340340 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b946d459c-n4llq" podStartSLOduration=4.340319943 podStartE2EDuration="4.340319943s" podCreationTimestamp="2025-11-24 11:26:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 
11:26:07.332281089 +0000 UTC m=+1019.043805565" watchObservedRunningTime="2025-11-24 11:26:07.340319943 +0000 UTC m=+1019.051844419" Nov 24 11:26:07 crc kubenswrapper[5072]: I1124 11:26:07.653618 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-6wkj4" Nov 24 11:26:07 crc kubenswrapper[5072]: I1124 11:26:07.683199 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68f6d27e-d239-4e24-8381-872893433a07-scripts\") pod \"68f6d27e-d239-4e24-8381-872893433a07\" (UID: \"68f6d27e-d239-4e24-8381-872893433a07\") " Nov 24 11:26:07 crc kubenswrapper[5072]: I1124 11:26:07.683242 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68f6d27e-d239-4e24-8381-872893433a07-logs\") pod \"68f6d27e-d239-4e24-8381-872893433a07\" (UID: \"68f6d27e-d239-4e24-8381-872893433a07\") " Nov 24 11:26:07 crc kubenswrapper[5072]: I1124 11:26:07.683334 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg59k\" (UniqueName: \"kubernetes.io/projected/68f6d27e-d239-4e24-8381-872893433a07-kube-api-access-xg59k\") pod \"68f6d27e-d239-4e24-8381-872893433a07\" (UID: \"68f6d27e-d239-4e24-8381-872893433a07\") " Nov 24 11:26:07 crc kubenswrapper[5072]: I1124 11:26:07.683417 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f6d27e-d239-4e24-8381-872893433a07-combined-ca-bundle\") pod \"68f6d27e-d239-4e24-8381-872893433a07\" (UID: \"68f6d27e-d239-4e24-8381-872893433a07\") " Nov 24 11:26:07 crc kubenswrapper[5072]: I1124 11:26:07.683517 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f6d27e-d239-4e24-8381-872893433a07-config-data\") pod \"68f6d27e-d239-4e24-8381-872893433a07\" (UID: \"68f6d27e-d239-4e24-8381-872893433a07\") " Nov 24 11:26:07 crc kubenswrapper[5072]: I1124 11:26:07.683762 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68f6d27e-d239-4e24-8381-872893433a07-logs" (OuterVolumeSpecName: "logs") pod "68f6d27e-d239-4e24-8381-872893433a07" (UID: "68f6d27e-d239-4e24-8381-872893433a07"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:26:07 crc kubenswrapper[5072]: I1124 11:26:07.683864 5072 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68f6d27e-d239-4e24-8381-872893433a07-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:07 crc kubenswrapper[5072]: I1124 11:26:07.688676 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68f6d27e-d239-4e24-8381-872893433a07-scripts" (OuterVolumeSpecName: "scripts") pod "68f6d27e-d239-4e24-8381-872893433a07" (UID: "68f6d27e-d239-4e24-8381-872893433a07"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:07 crc kubenswrapper[5072]: I1124 11:26:07.689044 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68f6d27e-d239-4e24-8381-872893433a07-kube-api-access-xg59k" (OuterVolumeSpecName: "kube-api-access-xg59k") pod "68f6d27e-d239-4e24-8381-872893433a07" (UID: "68f6d27e-d239-4e24-8381-872893433a07"). InnerVolumeSpecName "kube-api-access-xg59k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:26:07 crc kubenswrapper[5072]: I1124 11:26:07.722493 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68f6d27e-d239-4e24-8381-872893433a07-config-data" (OuterVolumeSpecName: "config-data") pod "68f6d27e-d239-4e24-8381-872893433a07" (UID: "68f6d27e-d239-4e24-8381-872893433a07"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:07 crc kubenswrapper[5072]: I1124 11:26:07.739429 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68f6d27e-d239-4e24-8381-872893433a07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68f6d27e-d239-4e24-8381-872893433a07" (UID: "68f6d27e-d239-4e24-8381-872893433a07"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:07 crc kubenswrapper[5072]: I1124 11:26:07.785759 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68f6d27e-d239-4e24-8381-872893433a07-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:07 crc kubenswrapper[5072]: I1124 11:26:07.785804 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg59k\" (UniqueName: \"kubernetes.io/projected/68f6d27e-d239-4e24-8381-872893433a07-kube-api-access-xg59k\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:07 crc kubenswrapper[5072]: I1124 11:26:07.785818 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f6d27e-d239-4e24-8381-872893433a07-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:07 crc kubenswrapper[5072]: I1124 11:26:07.785830 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f6d27e-d239-4e24-8381-872893433a07-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.331020 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dc7d7697-tf7nw" event={"ID":"c1ae9399-6f4c-4053-84c8-821eb2867dc8","Type":"ContainerStarted","Data":"805436af26e8ff96bd20093385a2525463665c21a86230ff53dc16c9df992967"} Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.333691 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-6wkj4" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.337167 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6wkj4" event={"ID":"68f6d27e-d239-4e24-8381-872893433a07","Type":"ContainerDied","Data":"aa4ca9518aee1324e10f1692917fddcddfa62021fe409712ddaf77b42ed7b287"} Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.337219 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa4ca9518aee1324e10f1692917fddcddfa62021fe409712ddaf77b42ed7b287" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.526173 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-64d9f94c7b-p7b2p"] Nov 24 11:26:08 crc kubenswrapper[5072]: E1124 11:26:08.526678 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68f6d27e-d239-4e24-8381-872893433a07" containerName="placement-db-sync" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.526693 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="68f6d27e-d239-4e24-8381-872893433a07" containerName="placement-db-sync" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.526960 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="68f6d27e-d239-4e24-8381-872893433a07" containerName="placement-db-sync" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.528292 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-64d9f94c7b-p7b2p" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.540476 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.540640 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.540887 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.541729 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-c78vm" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.541995 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.557231 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-64d9f94c7b-p7b2p"] Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.602514 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35ccd8e2-71e0-4a36-a51a-5c9a4734b124-public-tls-certs\") pod \"placement-64d9f94c7b-p7b2p\" (UID: \"35ccd8e2-71e0-4a36-a51a-5c9a4734b124\") " pod="openstack/placement-64d9f94c7b-p7b2p" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.602598 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm4cw\" (UniqueName: \"kubernetes.io/projected/35ccd8e2-71e0-4a36-a51a-5c9a4734b124-kube-api-access-bm4cw\") pod \"placement-64d9f94c7b-p7b2p\" (UID: \"35ccd8e2-71e0-4a36-a51a-5c9a4734b124\") " pod="openstack/placement-64d9f94c7b-p7b2p" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.602728 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/35ccd8e2-71e0-4a36-a51a-5c9a4734b124-logs\") pod \"placement-64d9f94c7b-p7b2p\" (UID: \"35ccd8e2-71e0-4a36-a51a-5c9a4734b124\") " pod="openstack/placement-64d9f94c7b-p7b2p" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.602774 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35ccd8e2-71e0-4a36-a51a-5c9a4734b124-combined-ca-bundle\") pod \"placement-64d9f94c7b-p7b2p\" (UID: \"35ccd8e2-71e0-4a36-a51a-5c9a4734b124\") " pod="openstack/placement-64d9f94c7b-p7b2p" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.602815 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35ccd8e2-71e0-4a36-a51a-5c9a4734b124-config-data\") pod \"placement-64d9f94c7b-p7b2p\" (UID: \"35ccd8e2-71e0-4a36-a51a-5c9a4734b124\") " pod="openstack/placement-64d9f94c7b-p7b2p" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.602841 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35ccd8e2-71e0-4a36-a51a-5c9a4734b124-scripts\") pod \"placement-64d9f94c7b-p7b2p\" (UID: \"35ccd8e2-71e0-4a36-a51a-5c9a4734b124\") " pod="openstack/placement-64d9f94c7b-p7b2p" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.603053 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35ccd8e2-71e0-4a36-a51a-5c9a4734b124-internal-tls-certs\") pod \"placement-64d9f94c7b-p7b2p\" (UID: \"35ccd8e2-71e0-4a36-a51a-5c9a4734b124\") " pod="openstack/placement-64d9f94c7b-p7b2p" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.758455 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35ccd8e2-71e0-4a36-a51a-5c9a4734b124-combined-ca-bundle\") pod \"placement-64d9f94c7b-p7b2p\" (UID: \"35ccd8e2-71e0-4a36-a51a-5c9a4734b124\") " pod="openstack/placement-64d9f94c7b-p7b2p" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.758520 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35ccd8e2-71e0-4a36-a51a-5c9a4734b124-config-data\") pod \"placement-64d9f94c7b-p7b2p\" (UID: \"35ccd8e2-71e0-4a36-a51a-5c9a4734b124\") " pod="openstack/placement-64d9f94c7b-p7b2p" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.758539 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35ccd8e2-71e0-4a36-a51a-5c9a4734b124-scripts\") pod \"placement-64d9f94c7b-p7b2p\" (UID: \"35ccd8e2-71e0-4a36-a51a-5c9a4734b124\") " pod="openstack/placement-64d9f94c7b-p7b2p" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.758604 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35ccd8e2-71e0-4a36-a51a-5c9a4734b124-internal-tls-certs\") pod \"placement-64d9f94c7b-p7b2p\" (UID: \"35ccd8e2-71e0-4a36-a51a-5c9a4734b124\") " pod="openstack/placement-64d9f94c7b-p7b2p" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.758625 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/35ccd8e2-71e0-4a36-a51a-5c9a4734b124-public-tls-certs\") pod \"placement-64d9f94c7b-p7b2p\" (UID: \"35ccd8e2-71e0-4a36-a51a-5c9a4734b124\") " pod="openstack/placement-64d9f94c7b-p7b2p" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.758653 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm4cw\" (UniqueName: \"kubernetes.io/projected/35ccd8e2-71e0-4a36-a51a-5c9a4734b124-kube-api-access-bm4cw\") pod \"placement-64d9f94c7b-p7b2p\" (UID: \"35ccd8e2-71e0-4a36-a51a-5c9a4734b124\") " pod="openstack/placement-64d9f94c7b-p7b2p" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.760990 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35ccd8e2-71e0-4a36-a51a-5c9a4734b124-logs\") pod \"placement-64d9f94c7b-p7b2p\" (UID: \"35ccd8e2-71e0-4a36-a51a-5c9a4734b124\") " pod="openstack/placement-64d9f94c7b-p7b2p" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.762940 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35ccd8e2-71e0-4a36-a51a-5c9a4734b124-logs\") pod \"placement-64d9f94c7b-p7b2p\" (UID: \"35ccd8e2-71e0-4a36-a51a-5c9a4734b124\") " pod="openstack/placement-64d9f94c7b-p7b2p" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.765960 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35ccd8e2-71e0-4a36-a51a-5c9a4734b124-public-tls-certs\") pod \"placement-64d9f94c7b-p7b2p\" (UID: \"35ccd8e2-71e0-4a36-a51a-5c9a4734b124\") " pod="openstack/placement-64d9f94c7b-p7b2p" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.772259 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35ccd8e2-71e0-4a36-a51a-5c9a4734b124-config-data\") pod \"placement-64d9f94c7b-p7b2p\" (UID: \"35ccd8e2-71e0-4a36-a51a-5c9a4734b124\") " pod="openstack/placement-64d9f94c7b-p7b2p" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.779114 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm4cw\" (UniqueName: \"kubernetes.io/projected/35ccd8e2-71e0-4a36-a51a-5c9a4734b124-kube-api-access-bm4cw\") pod \"placement-64d9f94c7b-p7b2p\" (UID: \"35ccd8e2-71e0-4a36-a51a-5c9a4734b124\") " pod="openstack/placement-64d9f94c7b-p7b2p" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.779596 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35ccd8e2-71e0-4a36-a51a-5c9a4734b124-internal-tls-certs\") pod \"placement-64d9f94c7b-p7b2p\" (UID: \"35ccd8e2-71e0-4a36-a51a-5c9a4734b124\") " pod="openstack/placement-64d9f94c7b-p7b2p" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.781791 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35ccd8e2-71e0-4a36-a51a-5c9a4734b124-combined-ca-bundle\") pod \"placement-64d9f94c7b-p7b2p\" (UID: \"35ccd8e2-71e0-4a36-a51a-5c9a4734b124\") " pod="openstack/placement-64d9f94c7b-p7b2p" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.785671 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35ccd8e2-71e0-4a36-a51a-5c9a4734b124-scripts\") pod \"placement-64d9f94c7b-p7b2p\" (UID: \"35ccd8e2-71e0-4a36-a51a-5c9a4734b124\") " 
pod="openstack/placement-64d9f94c7b-p7b2p" Nov 24 11:26:08 crc kubenswrapper[5072]: I1124 11:26:08.855956 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-64d9f94c7b-p7b2p" Nov 24 11:26:11 crc kubenswrapper[5072]: I1124 11:26:11.338861 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jrmwr" Nov 24 11:26:11 crc kubenswrapper[5072]: I1124 11:26:11.368098 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jrmwr" event={"ID":"b9d9bdb5-a7d6-4caf-9212-4707da33f459","Type":"ContainerDied","Data":"b02c519107a937abb9eb7a9aa2d97d5dadf52caa8aa30dff9b3cb869ea082c6f"} Nov 24 11:26:11 crc kubenswrapper[5072]: I1124 11:26:11.368129 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b02c519107a937abb9eb7a9aa2d97d5dadf52caa8aa30dff9b3cb869ea082c6f" Nov 24 11:26:11 crc kubenswrapper[5072]: I1124 11:26:11.368199 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jrmwr" Nov 24 11:26:11 crc kubenswrapper[5072]: I1124 11:26:11.402539 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-combined-ca-bundle\") pod \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\" (UID: \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\") " Nov 24 11:26:11 crc kubenswrapper[5072]: I1124 11:26:11.402588 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-config-data\") pod \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\" (UID: \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\") " Nov 24 11:26:11 crc kubenswrapper[5072]: I1124 11:26:11.402606 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-fernet-keys\") pod \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\" (UID: \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\") " Nov 24 11:26:11 crc kubenswrapper[5072]: I1124 11:26:11.402675 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-credential-keys\") pod \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\" (UID: \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\") " Nov 24 11:26:11 crc kubenswrapper[5072]: I1124 11:26:11.402699 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l4vz\" (UniqueName: \"kubernetes.io/projected/b9d9bdb5-a7d6-4caf-9212-4707da33f459-kube-api-access-8l4vz\") pod \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\" (UID: \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\") " Nov 24 11:26:11 crc kubenswrapper[5072]: I1124 11:26:11.402729 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-scripts\") pod \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\" (UID: \"b9d9bdb5-a7d6-4caf-9212-4707da33f459\") " Nov 24 11:26:11 crc kubenswrapper[5072]: I1124 11:26:11.409588 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-scripts" (OuterVolumeSpecName: "scripts") pod "b9d9bdb5-a7d6-4caf-9212-4707da33f459" (UID: 
"b9d9bdb5-a7d6-4caf-9212-4707da33f459"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:11 crc kubenswrapper[5072]: I1124 11:26:11.411725 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b9d9bdb5-a7d6-4caf-9212-4707da33f459" (UID: "b9d9bdb5-a7d6-4caf-9212-4707da33f459"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:11 crc kubenswrapper[5072]: I1124 11:26:11.411818 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b9d9bdb5-a7d6-4caf-9212-4707da33f459" (UID: "b9d9bdb5-a7d6-4caf-9212-4707da33f459"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:11 crc kubenswrapper[5072]: I1124 11:26:11.412077 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9d9bdb5-a7d6-4caf-9212-4707da33f459-kube-api-access-8l4vz" (OuterVolumeSpecName: "kube-api-access-8l4vz") pod "b9d9bdb5-a7d6-4caf-9212-4707da33f459" (UID: "b9d9bdb5-a7d6-4caf-9212-4707da33f459"). InnerVolumeSpecName "kube-api-access-8l4vz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:26:11 crc kubenswrapper[5072]: I1124 11:26:11.434893 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-config-data" (OuterVolumeSpecName: "config-data") pod "b9d9bdb5-a7d6-4caf-9212-4707da33f459" (UID: "b9d9bdb5-a7d6-4caf-9212-4707da33f459"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:11 crc kubenswrapper[5072]: I1124 11:26:11.463508 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9d9bdb5-a7d6-4caf-9212-4707da33f459" (UID: "b9d9bdb5-a7d6-4caf-9212-4707da33f459"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:11 crc kubenswrapper[5072]: I1124 11:26:11.504221 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:11 crc kubenswrapper[5072]: I1124 11:26:11.504258 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:11 crc kubenswrapper[5072]: I1124 11:26:11.504284 5072 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:11 crc kubenswrapper[5072]: I1124 11:26:11.504296 5072 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:11 crc kubenswrapper[5072]: I1124 11:26:11.504309 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8l4vz\" (UniqueName: \"kubernetes.io/projected/b9d9bdb5-a7d6-4caf-9212-4707da33f459-kube-api-access-8l4vz\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:11 crc kubenswrapper[5072]: I1124 11:26:11.504322 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9d9bdb5-a7d6-4caf-9212-4707da33f459-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:11 crc kubenswrapper[5072]: I1124 11:26:11.694392 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-64d9f94c7b-p7b2p"] Nov 24 11:26:11 crc kubenswrapper[5072]: W1124 11:26:11.703867 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35ccd8e2_71e0_4a36_a51a_5c9a4734b124.slice/crio-d1427134e955ee39243729d882b8899dbf15dd81109b8863fd2867fc1de7eb4b WatchSource:0}: Error finding container d1427134e955ee39243729d882b8899dbf15dd81109b8863fd2867fc1de7eb4b: Status 404 returned error can't find the container with id d1427134e955ee39243729d882b8899dbf15dd81109b8863fd2867fc1de7eb4b Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.378120 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64d9f94c7b-p7b2p" event={"ID":"35ccd8e2-71e0-4a36-a51a-5c9a4734b124","Type":"ContainerStarted","Data":"7c888ce9e3ebeabacca5da1313a7c407ae019a48c56c1b67b35a03ec8aa936f2"} Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.378465 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64d9f94c7b-p7b2p" event={"ID":"35ccd8e2-71e0-4a36-a51a-5c9a4734b124","Type":"ContainerStarted","Data":"20fc736d470c014ab051a61be59142d5739c32572f66e8dee17a7ef5e77084ac"} Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.378481 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64d9f94c7b-p7b2p" event={"ID":"35ccd8e2-71e0-4a36-a51a-5c9a4734b124","Type":"ContainerStarted","Data":"d1427134e955ee39243729d882b8899dbf15dd81109b8863fd2867fc1de7eb4b"} Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.379816 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-64d9f94c7b-p7b2p" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.379852 5072 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-64d9f94c7b-p7b2p" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.388636 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dc7d7697-tf7nw" event={"ID":"c1ae9399-6f4c-4053-84c8-821eb2867dc8","Type":"ContainerStarted","Data":"11b88ac32ba1452b217336d07427b55f0398497d2385e8c04183cdbee5b96707"} Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.388902 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6dc7d7697-tf7nw" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.391051 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15","Type":"ContainerStarted","Data":"6252bd6b25c76505e4286cfb1d08d90db2cce33ad288b8059ef7e4a6c64394d8"} Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.428192 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-64d9f94c7b-p7b2p" podStartSLOduration=4.428165376 podStartE2EDuration="4.428165376s" podCreationTimestamp="2025-11-24 11:26:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:26:12.40424466 +0000 UTC m=+1024.115769146" watchObservedRunningTime="2025-11-24 11:26:12.428165376 +0000 UTC m=+1024.139689872" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.436662 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6dc7d7697-tf7nw" podStartSLOduration=6.43663986 podStartE2EDuration="6.43663986s" podCreationTimestamp="2025-11-24 11:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:26:12.423416225 +0000 UTC m=+1024.134940751" watchObservedRunningTime="2025-11-24 11:26:12.43663986 +0000 UTC m=+1024.148164336" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.444155 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6cc7b79dbf-mkd8x"] Nov 24 11:26:12 crc kubenswrapper[5072]: E1124 11:26:12.444838 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9d9bdb5-a7d6-4caf-9212-4707da33f459" containerName="keystone-bootstrap" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.444876 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9d9bdb5-a7d6-4caf-9212-4707da33f459" containerName="keystone-bootstrap" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.445193 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9d9bdb5-a7d6-4caf-9212-4707da33f459" containerName="keystone-bootstrap" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.446167 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6cc7b79dbf-mkd8x" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.451972 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.452174 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.452268 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-lc8qn" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.452581 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.452794 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.452995 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.454752 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6cc7b79dbf-mkd8x"] Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.522931 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f71f36ff-e9cc-4207-8381-a4edf917c2b1-combined-ca-bundle\") pod \"keystone-6cc7b79dbf-mkd8x\" (UID: \"f71f36ff-e9cc-4207-8381-a4edf917c2b1\") " pod="openstack/keystone-6cc7b79dbf-mkd8x" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.523102 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f71f36ff-e9cc-4207-8381-a4edf917c2b1-public-tls-certs\") pod \"keystone-6cc7b79dbf-mkd8x\" (UID: \"f71f36ff-e9cc-4207-8381-a4edf917c2b1\") " pod="openstack/keystone-6cc7b79dbf-mkd8x" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.523179 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f71f36ff-e9cc-4207-8381-a4edf917c2b1-fernet-keys\") pod \"keystone-6cc7b79dbf-mkd8x\" (UID: \"f71f36ff-e9cc-4207-8381-a4edf917c2b1\") " pod="openstack/keystone-6cc7b79dbf-mkd8x" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.523199 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f71f36ff-e9cc-4207-8381-a4edf917c2b1-config-data\") pod \"keystone-6cc7b79dbf-mkd8x\" (UID: \"f71f36ff-e9cc-4207-8381-a4edf917c2b1\") " pod="openstack/keystone-6cc7b79dbf-mkd8x" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.523265 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xn4f\" (UniqueName: \"kubernetes.io/projected/f71f36ff-e9cc-4207-8381-a4edf917c2b1-kube-api-access-4xn4f\") pod \"keystone-6cc7b79dbf-mkd8x\" (UID: \"f71f36ff-e9cc-4207-8381-a4edf917c2b1\") " pod="openstack/keystone-6cc7b79dbf-mkd8x" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.523304 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f71f36ff-e9cc-4207-8381-a4edf917c2b1-internal-tls-certs\") pod \"keystone-6cc7b79dbf-mkd8x\" 
(UID: \"f71f36ff-e9cc-4207-8381-a4edf917c2b1\") " pod="openstack/keystone-6cc7b79dbf-mkd8x" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.523325 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f71f36ff-e9cc-4207-8381-a4edf917c2b1-scripts\") pod \"keystone-6cc7b79dbf-mkd8x\" (UID: \"f71f36ff-e9cc-4207-8381-a4edf917c2b1\") " pod="openstack/keystone-6cc7b79dbf-mkd8x" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.523363 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f71f36ff-e9cc-4207-8381-a4edf917c2b1-credential-keys\") pod \"keystone-6cc7b79dbf-mkd8x\" (UID: \"f71f36ff-e9cc-4207-8381-a4edf917c2b1\") " pod="openstack/keystone-6cc7b79dbf-mkd8x" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.624009 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xn4f\" (UniqueName: \"kubernetes.io/projected/f71f36ff-e9cc-4207-8381-a4edf917c2b1-kube-api-access-4xn4f\") pod \"keystone-6cc7b79dbf-mkd8x\" (UID: \"f71f36ff-e9cc-4207-8381-a4edf917c2b1\") " pod="openstack/keystone-6cc7b79dbf-mkd8x" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.624138 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f71f36ff-e9cc-4207-8381-a4edf917c2b1-internal-tls-certs\") pod \"keystone-6cc7b79dbf-mkd8x\" (UID: \"f71f36ff-e9cc-4207-8381-a4edf917c2b1\") " pod="openstack/keystone-6cc7b79dbf-mkd8x" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.624224 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f71f36ff-e9cc-4207-8381-a4edf917c2b1-scripts\") pod \"keystone-6cc7b79dbf-mkd8x\" (UID: \"f71f36ff-e9cc-4207-8381-a4edf917c2b1\") " pod="openstack/keystone-6cc7b79dbf-mkd8x" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.624306 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f71f36ff-e9cc-4207-8381-a4edf917c2b1-credential-keys\") pod \"keystone-6cc7b79dbf-mkd8x\" (UID: \"f71f36ff-e9cc-4207-8381-a4edf917c2b1\") " pod="openstack/keystone-6cc7b79dbf-mkd8x" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.624419 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f71f36ff-e9cc-4207-8381-a4edf917c2b1-combined-ca-bundle\") pod \"keystone-6cc7b79dbf-mkd8x\" (UID: \"f71f36ff-e9cc-4207-8381-a4edf917c2b1\") " pod="openstack/keystone-6cc7b79dbf-mkd8x" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.624523 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f71f36ff-e9cc-4207-8381-a4edf917c2b1-public-tls-certs\") pod \"keystone-6cc7b79dbf-mkd8x\" (UID: \"f71f36ff-e9cc-4207-8381-a4edf917c2b1\") " pod="openstack/keystone-6cc7b79dbf-mkd8x" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.624597 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f71f36ff-e9cc-4207-8381-a4edf917c2b1-fernet-keys\") pod \"keystone-6cc7b79dbf-mkd8x\" (UID: \"f71f36ff-e9cc-4207-8381-a4edf917c2b1\") " 
pod="openstack/keystone-6cc7b79dbf-mkd8x" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.624663 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f71f36ff-e9cc-4207-8381-a4edf917c2b1-config-data\") pod \"keystone-6cc7b79dbf-mkd8x\" (UID: \"f71f36ff-e9cc-4207-8381-a4edf917c2b1\") " pod="openstack/keystone-6cc7b79dbf-mkd8x" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.629821 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f71f36ff-e9cc-4207-8381-a4edf917c2b1-credential-keys\") pod \"keystone-6cc7b79dbf-mkd8x\" (UID: \"f71f36ff-e9cc-4207-8381-a4edf917c2b1\") " pod="openstack/keystone-6cc7b79dbf-mkd8x" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.630485 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f71f36ff-e9cc-4207-8381-a4edf917c2b1-config-data\") pod \"keystone-6cc7b79dbf-mkd8x\" (UID: \"f71f36ff-e9cc-4207-8381-a4edf917c2b1\") " pod="openstack/keystone-6cc7b79dbf-mkd8x" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.630744 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f71f36ff-e9cc-4207-8381-a4edf917c2b1-scripts\") pod \"keystone-6cc7b79dbf-mkd8x\" (UID: \"f71f36ff-e9cc-4207-8381-a4edf917c2b1\") " pod="openstack/keystone-6cc7b79dbf-mkd8x" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.630884 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f71f36ff-e9cc-4207-8381-a4edf917c2b1-combined-ca-bundle\") pod \"keystone-6cc7b79dbf-mkd8x\" (UID: \"f71f36ff-e9cc-4207-8381-a4edf917c2b1\") " pod="openstack/keystone-6cc7b79dbf-mkd8x" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.630958 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f71f36ff-e9cc-4207-8381-a4edf917c2b1-fernet-keys\") pod \"keystone-6cc7b79dbf-mkd8x\" (UID: \"f71f36ff-e9cc-4207-8381-a4edf917c2b1\") " pod="openstack/keystone-6cc7b79dbf-mkd8x" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.635946 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f71f36ff-e9cc-4207-8381-a4edf917c2b1-public-tls-certs\") pod \"keystone-6cc7b79dbf-mkd8x\" (UID: \"f71f36ff-e9cc-4207-8381-a4edf917c2b1\") " pod="openstack/keystone-6cc7b79dbf-mkd8x" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.636569 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f71f36ff-e9cc-4207-8381-a4edf917c2b1-internal-tls-certs\") pod \"keystone-6cc7b79dbf-mkd8x\" (UID: \"f71f36ff-e9cc-4207-8381-a4edf917c2b1\") " pod="openstack/keystone-6cc7b79dbf-mkd8x" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.644107 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xn4f\" (UniqueName: \"kubernetes.io/projected/f71f36ff-e9cc-4207-8381-a4edf917c2b1-kube-api-access-4xn4f\") pod \"keystone-6cc7b79dbf-mkd8x\" (UID: \"f71f36ff-e9cc-4207-8381-a4edf917c2b1\") " pod="openstack/keystone-6cc7b79dbf-mkd8x" Nov 24 11:26:12 crc kubenswrapper[5072]: I1124 11:26:12.822624 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6cc7b79dbf-mkd8x" Nov 24 11:26:13 crc kubenswrapper[5072]: I1124 11:26:13.312234 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6cc7b79dbf-mkd8x"] Nov 24 11:26:13 crc kubenswrapper[5072]: I1124 11:26:13.400844 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6cc7b79dbf-mkd8x" event={"ID":"f71f36ff-e9cc-4207-8381-a4edf917c2b1","Type":"ContainerStarted","Data":"aa4a0890ebd1a040f35682c1e9315842244cac01564965e744b743f09d7cb36d"} Nov 24 11:26:14 crc kubenswrapper[5072]: I1124 11:26:14.134400 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b946d459c-n4llq" Nov 24 11:26:14 crc kubenswrapper[5072]: I1124 11:26:14.191587 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-gkdpr"] Nov 24 11:26:14 crc kubenswrapper[5072]: I1124 11:26:14.191856 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7987f74bbc-gkdpr" podUID="09cc3e8f-663e-448b-b90f-8d794006c335" containerName="dnsmasq-dns" containerID="cri-o://a220cd3e03c994b9b665cb1ac88ac20ceafaee04d80bdc570e14bcfef12389bf" gracePeriod=10 Nov 24 11:26:14 crc kubenswrapper[5072]: I1124 11:26:14.414509 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6cc7b79dbf-mkd8x" event={"ID":"f71f36ff-e9cc-4207-8381-a4edf917c2b1","Type":"ContainerStarted","Data":"55043ca3188479fe691d79be1b5c31b5f98d8463d8a14d62e19dffe9acd6a760"} Nov 24 11:26:14 crc kubenswrapper[5072]: I1124 11:26:14.416491 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6cc7b79dbf-mkd8x" Nov 24 11:26:14 crc kubenswrapper[5072]: I1124 11:26:14.418559 5072 generic.go:334] "Generic (PLEG): container finished" podID="09cc3e8f-663e-448b-b90f-8d794006c335" containerID="a220cd3e03c994b9b665cb1ac88ac20ceafaee04d80bdc570e14bcfef12389bf" exitCode=0 Nov 24 11:26:14 crc kubenswrapper[5072]: I1124 11:26:14.418666 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-gkdpr" event={"ID":"09cc3e8f-663e-448b-b90f-8d794006c335","Type":"ContainerDied","Data":"a220cd3e03c994b9b665cb1ac88ac20ceafaee04d80bdc570e14bcfef12389bf"} Nov 24 11:26:14 crc kubenswrapper[5072]: I1124 11:26:14.442023 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6cc7b79dbf-mkd8x" podStartSLOduration=2.441995662 podStartE2EDuration="2.441995662s" podCreationTimestamp="2025-11-24 11:26:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:26:14.433216419 +0000 UTC m=+1026.144740895" watchObservedRunningTime="2025-11-24 11:26:14.441995662 +0000 UTC m=+1026.153520138" Nov 24 11:26:14 crc kubenswrapper[5072]: I1124 11:26:14.691187 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-gkdpr" Nov 24 11:26:14 crc kubenswrapper[5072]: I1124 11:26:14.715842 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09cc3e8f-663e-448b-b90f-8d794006c335-ovsdbserver-nb\") pod \"09cc3e8f-663e-448b-b90f-8d794006c335\" (UID: \"09cc3e8f-663e-448b-b90f-8d794006c335\") " Nov 24 11:26:14 crc kubenswrapper[5072]: I1124 11:26:14.715942 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hxwm\" (UniqueName: \"kubernetes.io/projected/09cc3e8f-663e-448b-b90f-8d794006c335-kube-api-access-8hxwm\") pod \"09cc3e8f-663e-448b-b90f-8d794006c335\" (UID: \"09cc3e8f-663e-448b-b90f-8d794006c335\") " Nov 24 11:26:14 crc kubenswrapper[5072]: I1124 11:26:14.716089 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cc3e8f-663e-448b-b90f-8d794006c335-config\") pod \"09cc3e8f-663e-448b-b90f-8d794006c335\" (UID: \"09cc3e8f-663e-448b-b90f-8d794006c335\") " Nov 24 11:26:14 crc kubenswrapper[5072]: I1124 11:26:14.716115 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09cc3e8f-663e-448b-b90f-8d794006c335-ovsdbserver-sb\") pod \"09cc3e8f-663e-448b-b90f-8d794006c335\" (UID: \"09cc3e8f-663e-448b-b90f-8d794006c335\") " Nov 24 11:26:14 crc kubenswrapper[5072]: I1124 11:26:14.716228 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09cc3e8f-663e-448b-b90f-8d794006c335-dns-svc\") pod \"09cc3e8f-663e-448b-b90f-8d794006c335\" (UID: \"09cc3e8f-663e-448b-b90f-8d794006c335\") " Nov 24 11:26:14 crc kubenswrapper[5072]: I1124 11:26:14.725612 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cc3e8f-663e-448b-b90f-8d794006c335-kube-api-access-8hxwm" (OuterVolumeSpecName: "kube-api-access-8hxwm") pod "09cc3e8f-663e-448b-b90f-8d794006c335" (UID: "09cc3e8f-663e-448b-b90f-8d794006c335"). InnerVolumeSpecName "kube-api-access-8hxwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:26:14 crc kubenswrapper[5072]: I1124 11:26:14.758352 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cc3e8f-663e-448b-b90f-8d794006c335-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "09cc3e8f-663e-448b-b90f-8d794006c335" (UID: "09cc3e8f-663e-448b-b90f-8d794006c335"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:26:14 crc kubenswrapper[5072]: I1124 11:26:14.761554 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cc3e8f-663e-448b-b90f-8d794006c335-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "09cc3e8f-663e-448b-b90f-8d794006c335" (UID: "09cc3e8f-663e-448b-b90f-8d794006c335"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:26:14 crc kubenswrapper[5072]: I1124 11:26:14.785425 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cc3e8f-663e-448b-b90f-8d794006c335-config" (OuterVolumeSpecName: "config") pod "09cc3e8f-663e-448b-b90f-8d794006c335" (UID: "09cc3e8f-663e-448b-b90f-8d794006c335"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:26:14 crc kubenswrapper[5072]: I1124 11:26:14.785545 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cc3e8f-663e-448b-b90f-8d794006c335-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "09cc3e8f-663e-448b-b90f-8d794006c335" (UID: "09cc3e8f-663e-448b-b90f-8d794006c335"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:26:14 crc kubenswrapper[5072]: I1124 11:26:14.817653 5072 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09cc3e8f-663e-448b-b90f-8d794006c335-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:14 crc kubenswrapper[5072]: I1124 11:26:14.817686 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hxwm\" (UniqueName: \"kubernetes.io/projected/09cc3e8f-663e-448b-b90f-8d794006c335-kube-api-access-8hxwm\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:14 crc kubenswrapper[5072]: I1124 11:26:14.817698 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cc3e8f-663e-448b-b90f-8d794006c335-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:14 crc kubenswrapper[5072]: I1124 11:26:14.817707 5072 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09cc3e8f-663e-448b-b90f-8d794006c335-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:14 crc kubenswrapper[5072]: I1124 11:26:14.817715 5072 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09cc3e8f-663e-448b-b90f-8d794006c335-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:15 crc kubenswrapper[5072]: I1124 11:26:15.437350 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-gkdpr" Nov 24 11:26:15 crc kubenswrapper[5072]: I1124 11:26:15.437520 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-gkdpr" event={"ID":"09cc3e8f-663e-448b-b90f-8d794006c335","Type":"ContainerDied","Data":"f8fbf131a977f58c3d5c7bd192ce10d3a68ad3c9f8f869645a44ce6215082ff9"} Nov 24 11:26:15 crc kubenswrapper[5072]: I1124 11:26:15.438227 5072 scope.go:117] "RemoveContainer" containerID="a220cd3e03c994b9b665cb1ac88ac20ceafaee04d80bdc570e14bcfef12389bf" Nov 24 11:26:15 crc kubenswrapper[5072]: I1124 11:26:15.472603 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-gkdpr"] Nov 24 11:26:15 crc kubenswrapper[5072]: I1124 11:26:15.480995 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-gkdpr"] Nov 24 11:26:17 crc kubenswrapper[5072]: I1124 11:26:17.035093 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cc3e8f-663e-448b-b90f-8d794006c335" path="/var/lib/kubelet/pods/09cc3e8f-663e-448b-b90f-8d794006c335/volumes" Nov 24 11:26:19 crc kubenswrapper[5072]: I1124 11:26:19.001779 5072 scope.go:117] "RemoveContainer" containerID="d934220f2b88c3c0da8cc478cf088ad1c5a8282506d738e4c323c259cbd686d2" Nov 24 11:26:20 crc kubenswrapper[5072]: I1124 11:26:20.493524 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g5npx" event={"ID":"feff4031-5012-468f-8dd6-d58c5dae8d29","Type":"ContainerStarted","Data":"177d910126f83504bed2ff81ce80cbea56bdbb20d350d92a1c83d12f5b98f316"} Nov 24 11:26:20 crc kubenswrapper[5072]: I1124 11:26:20.507840 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15","Type":"ContainerStarted","Data":"81e61b7a968f85778c1b121aef76aad86078bce3bbd9f05b3195cf88dc6a517a"} Nov 24 11:26:20 crc kubenswrapper[5072]: I1124 11:26:20.508261 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 11:26:20 crc kubenswrapper[5072]: I1124 11:26:20.508270 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8b0e75bc-78b4-45e2-9c55-7b573ab3cc15" containerName="ceilometer-central-agent" containerID="cri-o://08862e0312856263cde359eba19295ccb970707b32c1800e019b48031123b752" gracePeriod=30 Nov 24 11:26:20 crc kubenswrapper[5072]: I1124 11:26:20.508790 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8b0e75bc-78b4-45e2-9c55-7b573ab3cc15" containerName="ceilometer-notification-agent" containerID="cri-o://43986ad77a0fa21d6223cb16aed3a85747ac1c462e8ad731db536723897da2b2" gracePeriod=30 Nov 24 11:26:20 crc kubenswrapper[5072]: I1124 11:26:20.509023 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8b0e75bc-78b4-45e2-9c55-7b573ab3cc15" containerName="sg-core" containerID="cri-o://6252bd6b25c76505e4286cfb1d08d90db2cce33ad288b8059ef7e4a6c64394d8" gracePeriod=30 Nov 24 11:26:20 crc kubenswrapper[5072]: I1124 11:26:20.508794 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8b0e75bc-78b4-45e2-9c55-7b573ab3cc15" containerName="proxy-httpd" containerID="cri-o://81e61b7a968f85778c1b121aef76aad86078bce3bbd9f05b3195cf88dc6a517a" gracePeriod=30 Nov 24 11:26:20 crc kubenswrapper[5072]: I1124 11:26:20.524658 5072 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-g5npx" podStartSLOduration=2.602421257 podStartE2EDuration="39.52462483s" podCreationTimestamp="2025-11-24 11:25:41 +0000 UTC" firstStartedPulling="2025-11-24 11:25:42.597386072 +0000 UTC m=+994.308910548" lastFinishedPulling="2025-11-24 11:26:19.519589645 +0000 UTC m=+1031.231114121" observedRunningTime="2025-11-24 11:26:20.514272658 +0000 UTC m=+1032.225797174" watchObservedRunningTime="2025-11-24 11:26:20.52462483 +0000 UTC m=+1032.236149346" Nov 24 11:26:20 crc kubenswrapper[5072]: I1124 11:26:20.554926 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.432288988 podStartE2EDuration="39.554905697s" podCreationTimestamp="2025-11-24 11:25:41 +0000 UTC" firstStartedPulling="2025-11-24 11:25:42.418185363 +0000 UTC m=+994.129709839" lastFinishedPulling="2025-11-24 11:26:19.540802072 +0000 UTC m=+1031.252326548" observedRunningTime="2025-11-24 11:26:20.546823233 +0000 UTC m=+1032.258347719" watchObservedRunningTime="2025-11-24 11:26:20.554905697 +0000 UTC m=+1032.266430183" Nov 24 11:26:21 crc kubenswrapper[5072]: I1124 11:26:21.519365 5072 generic.go:334] "Generic (PLEG): container finished" podID="8b0e75bc-78b4-45e2-9c55-7b573ab3cc15" containerID="81e61b7a968f85778c1b121aef76aad86078bce3bbd9f05b3195cf88dc6a517a" exitCode=0 Nov 24 11:26:21 crc kubenswrapper[5072]: I1124 11:26:21.519726 5072 generic.go:334] "Generic (PLEG): container finished" podID="8b0e75bc-78b4-45e2-9c55-7b573ab3cc15" containerID="6252bd6b25c76505e4286cfb1d08d90db2cce33ad288b8059ef7e4a6c64394d8" exitCode=2 Nov 24 11:26:21 crc kubenswrapper[5072]: I1124 11:26:21.519735 5072 generic.go:334] "Generic (PLEG): container finished" podID="8b0e75bc-78b4-45e2-9c55-7b573ab3cc15" containerID="08862e0312856263cde359eba19295ccb970707b32c1800e019b48031123b752" exitCode=0 Nov 24 11:26:21 crc kubenswrapper[5072]: I1124 11:26:21.519410 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15","Type":"ContainerDied","Data":"81e61b7a968f85778c1b121aef76aad86078bce3bbd9f05b3195cf88dc6a517a"} Nov 24 11:26:21 crc kubenswrapper[5072]: I1124 11:26:21.519800 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15","Type":"ContainerDied","Data":"6252bd6b25c76505e4286cfb1d08d90db2cce33ad288b8059ef7e4a6c64394d8"} Nov 24 11:26:21 crc kubenswrapper[5072]: I1124 11:26:21.519814 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15","Type":"ContainerDied","Data":"08862e0312856263cde359eba19295ccb970707b32c1800e019b48031123b752"} Nov 24 11:26:21 crc kubenswrapper[5072]: I1124 11:26:21.522702 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8npk7" event={"ID":"ab063039-b4d9-45d8-9336-35316fd1ab08","Type":"ContainerStarted","Data":"8a22f32584c45f6be5f8cd8133d0159b79ad525fbafc02835bd59e52937a16e9"} Nov 24 11:26:21 crc kubenswrapper[5072]: I1124 11:26:21.543967 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-8npk7" podStartSLOduration=2.5712380379999997 podStartE2EDuration="40.543948618s" podCreationTimestamp="2025-11-24 11:25:41 +0000 UTC" firstStartedPulling="2025-11-24 11:25:42.585301816 +0000 UTC m=+994.296826292" lastFinishedPulling="2025-11-24 11:26:20.558012386 
+0000 UTC m=+1032.269536872" observedRunningTime="2025-11-24 11:26:21.542770628 +0000 UTC m=+1033.254295124" watchObservedRunningTime="2025-11-24 11:26:21.543948618 +0000 UTC m=+1033.255473094" Nov 24 11:26:22 crc kubenswrapper[5072]: I1124 11:26:22.532606 5072 generic.go:334] "Generic (PLEG): container finished" podID="feff4031-5012-468f-8dd6-d58c5dae8d29" containerID="177d910126f83504bed2ff81ce80cbea56bdbb20d350d92a1c83d12f5b98f316" exitCode=0 Nov 24 11:26:22 crc kubenswrapper[5072]: I1124 11:26:22.532709 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g5npx" event={"ID":"feff4031-5012-468f-8dd6-d58c5dae8d29","Type":"ContainerDied","Data":"177d910126f83504bed2ff81ce80cbea56bdbb20d350d92a1c83d12f5b98f316"} Nov 24 11:26:22 crc kubenswrapper[5072]: I1124 11:26:22.539775 5072 generic.go:334] "Generic (PLEG): container finished" podID="8b0e75bc-78b4-45e2-9c55-7b573ab3cc15" containerID="43986ad77a0fa21d6223cb16aed3a85747ac1c462e8ad731db536723897da2b2" exitCode=0 Nov 24 11:26:22 crc kubenswrapper[5072]: I1124 11:26:22.539823 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15","Type":"ContainerDied","Data":"43986ad77a0fa21d6223cb16aed3a85747ac1c462e8ad731db536723897da2b2"} Nov 24 11:26:22 crc kubenswrapper[5072]: I1124 11:26:22.665677 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:26:22 crc kubenswrapper[5072]: I1124 11:26:22.786282 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-run-httpd\") pod \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\" (UID: \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\") " Nov 24 11:26:22 crc kubenswrapper[5072]: I1124 11:26:22.786406 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftdk2\" (UniqueName: \"kubernetes.io/projected/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-kube-api-access-ftdk2\") pod \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\" (UID: \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\") " Nov 24 11:26:22 crc kubenswrapper[5072]: I1124 11:26:22.786574 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-sg-core-conf-yaml\") pod \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\" (UID: \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\") " Nov 24 11:26:22 crc kubenswrapper[5072]: I1124 11:26:22.786631 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-log-httpd\") pod \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\" (UID: \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\") " Nov 24 11:26:22 crc kubenswrapper[5072]: I1124 11:26:22.786722 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-combined-ca-bundle\") pod \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\" (UID: \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\") " Nov 24 11:26:22 crc kubenswrapper[5072]: I1124 11:26:22.786766 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8b0e75bc-78b4-45e2-9c55-7b573ab3cc15" (UID: 
"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:26:22 crc kubenswrapper[5072]: I1124 11:26:22.786996 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8b0e75bc-78b4-45e2-9c55-7b573ab3cc15" (UID: "8b0e75bc-78b4-45e2-9c55-7b573ab3cc15"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:26:22 crc kubenswrapper[5072]: I1124 11:26:22.787345 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-config-data\") pod \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\" (UID: \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\") " Nov 24 11:26:22 crc kubenswrapper[5072]: I1124 11:26:22.787418 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-scripts\") pod \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\" (UID: \"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15\") " Nov 24 11:26:22 crc kubenswrapper[5072]: I1124 11:26:22.788119 5072 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:22 crc kubenswrapper[5072]: I1124 11:26:22.788157 5072 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:22 crc kubenswrapper[5072]: I1124 11:26:22.793456 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-kube-api-access-ftdk2" (OuterVolumeSpecName: "kube-api-access-ftdk2") pod "8b0e75bc-78b4-45e2-9c55-7b573ab3cc15" (UID: "8b0e75bc-78b4-45e2-9c55-7b573ab3cc15"). InnerVolumeSpecName "kube-api-access-ftdk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:26:22 crc kubenswrapper[5072]: I1124 11:26:22.793464 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-scripts" (OuterVolumeSpecName: "scripts") pod "8b0e75bc-78b4-45e2-9c55-7b573ab3cc15" (UID: "8b0e75bc-78b4-45e2-9c55-7b573ab3cc15"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:22 crc kubenswrapper[5072]: I1124 11:26:22.816038 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8b0e75bc-78b4-45e2-9c55-7b573ab3cc15" (UID: "8b0e75bc-78b4-45e2-9c55-7b573ab3cc15"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:22 crc kubenswrapper[5072]: I1124 11:26:22.858354 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b0e75bc-78b4-45e2-9c55-7b573ab3cc15" (UID: "8b0e75bc-78b4-45e2-9c55-7b573ab3cc15"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:22 crc kubenswrapper[5072]: I1124 11:26:22.878178 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-config-data" (OuterVolumeSpecName: "config-data") pod "8b0e75bc-78b4-45e2-9c55-7b573ab3cc15" (UID: "8b0e75bc-78b4-45e2-9c55-7b573ab3cc15"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:22 crc kubenswrapper[5072]: I1124 11:26:22.889648 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftdk2\" (UniqueName: \"kubernetes.io/projected/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-kube-api-access-ftdk2\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:22 crc kubenswrapper[5072]: I1124 11:26:22.889710 5072 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:22 crc kubenswrapper[5072]: I1124 11:26:22.889722 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:22 crc kubenswrapper[5072]: I1124 11:26:22.889731 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:22 crc kubenswrapper[5072]: I1124 11:26:22.889743 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.553496 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b0e75bc-78b4-45e2-9c55-7b573ab3cc15","Type":"ContainerDied","Data":"e223c0f113b92b9808ab93835aff54c6d0e81c819bc94ad66797878bff8a649e"} Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.553556 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.553598 5072 scope.go:117] "RemoveContainer" containerID="81e61b7a968f85778c1b121aef76aad86078bce3bbd9f05b3195cf88dc6a517a" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.591027 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.599420 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.620427 5072 scope.go:117] "RemoveContainer" containerID="6252bd6b25c76505e4286cfb1d08d90db2cce33ad288b8059ef7e4a6c64394d8" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.637833 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:26:23 crc kubenswrapper[5072]: E1124 11:26:23.638234 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b0e75bc-78b4-45e2-9c55-7b573ab3cc15" containerName="ceilometer-notification-agent" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.638256 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b0e75bc-78b4-45e2-9c55-7b573ab3cc15" containerName="ceilometer-notification-agent" Nov 24 11:26:23 crc kubenswrapper[5072]: E1124 11:26:23.638269 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09cc3e8f-663e-448b-b90f-8d794006c335" containerName="init" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.638277 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="09cc3e8f-663e-448b-b90f-8d794006c335" containerName="init" Nov 24 11:26:23 crc kubenswrapper[5072]: E1124 11:26:23.638293 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b0e75bc-78b4-45e2-9c55-7b573ab3cc15" containerName="sg-core" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.638301 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b0e75bc-78b4-45e2-9c55-7b573ab3cc15" containerName="sg-core" Nov 24 11:26:23 crc kubenswrapper[5072]: E1124 11:26:23.638323 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b0e75bc-78b4-45e2-9c55-7b573ab3cc15" containerName="ceilometer-central-agent" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.638329 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b0e75bc-78b4-45e2-9c55-7b573ab3cc15" containerName="ceilometer-central-agent" Nov 24 11:26:23 crc kubenswrapper[5072]: E1124 11:26:23.638340 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b0e75bc-78b4-45e2-9c55-7b573ab3cc15" containerName="proxy-httpd" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.638346 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b0e75bc-78b4-45e2-9c55-7b573ab3cc15" containerName="proxy-httpd" Nov 24 11:26:23 crc kubenswrapper[5072]: E1124 11:26:23.638357 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09cc3e8f-663e-448b-b90f-8d794006c335" containerName="dnsmasq-dns" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.638364 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="09cc3e8f-663e-448b-b90f-8d794006c335" containerName="dnsmasq-dns" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.638677 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="09cc3e8f-663e-448b-b90f-8d794006c335" containerName="dnsmasq-dns" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.638700 5072 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8b0e75bc-78b4-45e2-9c55-7b573ab3cc15" containerName="proxy-httpd" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.638720 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b0e75bc-78b4-45e2-9c55-7b573ab3cc15" containerName="ceilometer-central-agent" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.638735 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b0e75bc-78b4-45e2-9c55-7b573ab3cc15" containerName="sg-core" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.638748 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b0e75bc-78b4-45e2-9c55-7b573ab3cc15" containerName="ceilometer-notification-agent" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.640302 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.645304 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.645547 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.660990 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.664353 5072 scope.go:117] "RemoveContainer" containerID="43986ad77a0fa21d6223cb16aed3a85747ac1c462e8ad731db536723897da2b2" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.698397 5072 scope.go:117] "RemoveContainer" containerID="08862e0312856263cde359eba19295ccb970707b32c1800e019b48031123b752" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.704183 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b596a610-936b-465e-aa9d-cb3b8f7811a4-log-httpd\") pod \"ceilometer-0\" (UID: \"b596a610-936b-465e-aa9d-cb3b8f7811a4\") " pod="openstack/ceilometer-0" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.704246 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b596a610-936b-465e-aa9d-cb3b8f7811a4-scripts\") pod \"ceilometer-0\" (UID: \"b596a610-936b-465e-aa9d-cb3b8f7811a4\") " pod="openstack/ceilometer-0" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.704275 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b596a610-936b-465e-aa9d-cb3b8f7811a4-run-httpd\") pod \"ceilometer-0\" (UID: \"b596a610-936b-465e-aa9d-cb3b8f7811a4\") " pod="openstack/ceilometer-0" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.704386 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b596a610-936b-465e-aa9d-cb3b8f7811a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b596a610-936b-465e-aa9d-cb3b8f7811a4\") " pod="openstack/ceilometer-0" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.704419 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8pz6\" (UniqueName: \"kubernetes.io/projected/b596a610-936b-465e-aa9d-cb3b8f7811a4-kube-api-access-n8pz6\") pod \"ceilometer-0\" (UID: \"b596a610-936b-465e-aa9d-cb3b8f7811a4\") " 
pod="openstack/ceilometer-0" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.704459 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b596a610-936b-465e-aa9d-cb3b8f7811a4-config-data\") pod \"ceilometer-0\" (UID: \"b596a610-936b-465e-aa9d-cb3b8f7811a4\") " pod="openstack/ceilometer-0" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.704488 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b596a610-936b-465e-aa9d-cb3b8f7811a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b596a610-936b-465e-aa9d-cb3b8f7811a4\") " pod="openstack/ceilometer-0" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.806446 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b596a610-936b-465e-aa9d-cb3b8f7811a4-log-httpd\") pod \"ceilometer-0\" (UID: \"b596a610-936b-465e-aa9d-cb3b8f7811a4\") " pod="openstack/ceilometer-0" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.806773 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b596a610-936b-465e-aa9d-cb3b8f7811a4-scripts\") pod \"ceilometer-0\" (UID: \"b596a610-936b-465e-aa9d-cb3b8f7811a4\") " pod="openstack/ceilometer-0" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.806814 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b596a610-936b-465e-aa9d-cb3b8f7811a4-run-httpd\") pod \"ceilometer-0\" (UID: \"b596a610-936b-465e-aa9d-cb3b8f7811a4\") " pod="openstack/ceilometer-0" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.806888 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b596a610-936b-465e-aa9d-cb3b8f7811a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b596a610-936b-465e-aa9d-cb3b8f7811a4\") " pod="openstack/ceilometer-0" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.806922 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8pz6\" (UniqueName: \"kubernetes.io/projected/b596a610-936b-465e-aa9d-cb3b8f7811a4-kube-api-access-n8pz6\") pod \"ceilometer-0\" (UID: \"b596a610-936b-465e-aa9d-cb3b8f7811a4\") " pod="openstack/ceilometer-0" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.806977 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b596a610-936b-465e-aa9d-cb3b8f7811a4-config-data\") pod \"ceilometer-0\" (UID: \"b596a610-936b-465e-aa9d-cb3b8f7811a4\") " pod="openstack/ceilometer-0" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.807020 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b596a610-936b-465e-aa9d-cb3b8f7811a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b596a610-936b-465e-aa9d-cb3b8f7811a4\") " pod="openstack/ceilometer-0" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.808604 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b596a610-936b-465e-aa9d-cb3b8f7811a4-log-httpd\") pod \"ceilometer-0\" (UID: 
\"b596a610-936b-465e-aa9d-cb3b8f7811a4\") " pod="openstack/ceilometer-0" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.810953 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b596a610-936b-465e-aa9d-cb3b8f7811a4-run-httpd\") pod \"ceilometer-0\" (UID: \"b596a610-936b-465e-aa9d-cb3b8f7811a4\") " pod="openstack/ceilometer-0" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.813197 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b596a610-936b-465e-aa9d-cb3b8f7811a4-scripts\") pod \"ceilometer-0\" (UID: \"b596a610-936b-465e-aa9d-cb3b8f7811a4\") " pod="openstack/ceilometer-0" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.814090 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b596a610-936b-465e-aa9d-cb3b8f7811a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b596a610-936b-465e-aa9d-cb3b8f7811a4\") " pod="openstack/ceilometer-0" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.815800 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b596a610-936b-465e-aa9d-cb3b8f7811a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b596a610-936b-465e-aa9d-cb3b8f7811a4\") " pod="openstack/ceilometer-0" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.817463 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b596a610-936b-465e-aa9d-cb3b8f7811a4-config-data\") pod \"ceilometer-0\" (UID: \"b596a610-936b-465e-aa9d-cb3b8f7811a4\") " pod="openstack/ceilometer-0" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.849267 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8pz6\" (UniqueName: \"kubernetes.io/projected/b596a610-936b-465e-aa9d-cb3b8f7811a4-kube-api-access-n8pz6\") pod \"ceilometer-0\" (UID: \"b596a610-936b-465e-aa9d-cb3b8f7811a4\") " pod="openstack/ceilometer-0" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.928856 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g5npx" Nov 24 11:26:23 crc kubenswrapper[5072]: I1124 11:26:23.966906 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:26:24 crc kubenswrapper[5072]: I1124 11:26:24.010003 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feff4031-5012-468f-8dd6-d58c5dae8d29-combined-ca-bundle\") pod \"feff4031-5012-468f-8dd6-d58c5dae8d29\" (UID: \"feff4031-5012-468f-8dd6-d58c5dae8d29\") " Nov 24 11:26:24 crc kubenswrapper[5072]: I1124 11:26:24.010269 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvgkt\" (UniqueName: \"kubernetes.io/projected/feff4031-5012-468f-8dd6-d58c5dae8d29-kube-api-access-rvgkt\") pod \"feff4031-5012-468f-8dd6-d58c5dae8d29\" (UID: \"feff4031-5012-468f-8dd6-d58c5dae8d29\") " Nov 24 11:26:24 crc kubenswrapper[5072]: I1124 11:26:24.010329 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/feff4031-5012-468f-8dd6-d58c5dae8d29-db-sync-config-data\") pod \"feff4031-5012-468f-8dd6-d58c5dae8d29\" (UID: \"feff4031-5012-468f-8dd6-d58c5dae8d29\") " Nov 24 11:26:24 crc kubenswrapper[5072]: I1124 11:26:24.013555 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feff4031-5012-468f-8dd6-d58c5dae8d29-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "feff4031-5012-468f-8dd6-d58c5dae8d29" (UID: "feff4031-5012-468f-8dd6-d58c5dae8d29"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:24 crc kubenswrapper[5072]: I1124 11:26:24.013692 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feff4031-5012-468f-8dd6-d58c5dae8d29-kube-api-access-rvgkt" (OuterVolumeSpecName: "kube-api-access-rvgkt") pod "feff4031-5012-468f-8dd6-d58c5dae8d29" (UID: "feff4031-5012-468f-8dd6-d58c5dae8d29"). InnerVolumeSpecName "kube-api-access-rvgkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:26:24 crc kubenswrapper[5072]: I1124 11:26:24.033157 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feff4031-5012-468f-8dd6-d58c5dae8d29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "feff4031-5012-468f-8dd6-d58c5dae8d29" (UID: "feff4031-5012-468f-8dd6-d58c5dae8d29"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:24 crc kubenswrapper[5072]: I1124 11:26:24.112642 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvgkt\" (UniqueName: \"kubernetes.io/projected/feff4031-5012-468f-8dd6-d58c5dae8d29-kube-api-access-rvgkt\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:24 crc kubenswrapper[5072]: I1124 11:26:24.112677 5072 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/feff4031-5012-468f-8dd6-d58c5dae8d29-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:24 crc kubenswrapper[5072]: I1124 11:26:24.112690 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feff4031-5012-468f-8dd6-d58c5dae8d29-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:24 crc kubenswrapper[5072]: I1124 11:26:24.400454 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:26:24 crc kubenswrapper[5072]: W1124 11:26:24.414495 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb596a610_936b_465e_aa9d_cb3b8f7811a4.slice/crio-ff55489101bfec25266b04b65979d1f0dbf879163397987b45ca098ceeb83a17 WatchSource:0}: Error finding container ff55489101bfec25266b04b65979d1f0dbf879163397987b45ca098ceeb83a17: Status 404 returned error can't find the container with id ff55489101bfec25266b04b65979d1f0dbf879163397987b45ca098ceeb83a17 Nov 24 11:26:24 crc kubenswrapper[5072]: I1124 11:26:24.566773 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g5npx" event={"ID":"feff4031-5012-468f-8dd6-d58c5dae8d29","Type":"ContainerDied","Data":"bdbd39a144d44d45a300b33842ecdeb7cb131ab8fd0489d4eb9c0865a9231705"} Nov 24 11:26:24 crc kubenswrapper[5072]: I1124 11:26:24.566848 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdbd39a144d44d45a300b33842ecdeb7cb131ab8fd0489d4eb9c0865a9231705" Nov 24 11:26:24 crc kubenswrapper[5072]: I1124 11:26:24.566802 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g5npx" Nov 24 11:26:24 crc kubenswrapper[5072]: I1124 11:26:24.569365 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b596a610-936b-465e-aa9d-cb3b8f7811a4","Type":"ContainerStarted","Data":"ff55489101bfec25266b04b65979d1f0dbf879163397987b45ca098ceeb83a17"} Nov 24 11:26:24 crc kubenswrapper[5072]: I1124 11:26:24.943425 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-55f6867c5c-rjpdx"] Nov 24 11:26:24 crc kubenswrapper[5072]: E1124 11:26:24.944157 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feff4031-5012-468f-8dd6-d58c5dae8d29" containerName="barbican-db-sync" Nov 24 11:26:24 crc kubenswrapper[5072]: I1124 11:26:24.944181 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="feff4031-5012-468f-8dd6-d58c5dae8d29" containerName="barbican-db-sync" Nov 24 11:26:24 crc kubenswrapper[5072]: I1124 11:26:24.948640 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="feff4031-5012-468f-8dd6-d58c5dae8d29" containerName="barbican-db-sync" Nov 24 11:26:24 crc kubenswrapper[5072]: I1124 11:26:24.949944 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-55f6867c5c-rjpdx" Nov 24 11:26:24 crc kubenswrapper[5072]: I1124 11:26:24.953158 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-rbcpr" Nov 24 11:26:24 crc kubenswrapper[5072]: I1124 11:26:24.965561 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 24 11:26:24 crc kubenswrapper[5072]: I1124 11:26:24.965839 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 24 11:26:24 crc kubenswrapper[5072]: I1124 11:26:24.987543 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-56f6884b8b-d9lh4"] Nov 24 11:26:24 crc kubenswrapper[5072]: I1124 11:26:24.992329 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-56f6884b8b-d9lh4" Nov 24 11:26:24 crc kubenswrapper[5072]: I1124 11:26:24.995127 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 24 11:26:24 crc kubenswrapper[5072]: I1124 11:26:24.997434 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-55f6867c5c-rjpdx"] Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.026566 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/522a3a4f-dbc9-4b6a-9bff-5df22b4cba44-combined-ca-bundle\") pod \"barbican-worker-55f6867c5c-rjpdx\" (UID: \"522a3a4f-dbc9-4b6a-9bff-5df22b4cba44\") " pod="openstack/barbican-worker-55f6867c5c-rjpdx" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.026665 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17dcf560-c08b-4adb-b4e1-90887cddba39-logs\") pod \"barbican-keystone-listener-56f6884b8b-d9lh4\" (UID: \"17dcf560-c08b-4adb-b4e1-90887cddba39\") " pod="openstack/barbican-keystone-listener-56f6884b8b-d9lh4" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.026693 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9977c\" (UniqueName: \"kubernetes.io/projected/522a3a4f-dbc9-4b6a-9bff-5df22b4cba44-kube-api-access-9977c\") pod \"barbican-worker-55f6867c5c-rjpdx\" (UID: \"522a3a4f-dbc9-4b6a-9bff-5df22b4cba44\") " pod="openstack/barbican-worker-55f6867c5c-rjpdx" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.026716 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/522a3a4f-dbc9-4b6a-9bff-5df22b4cba44-config-data\") pod \"barbican-worker-55f6867c5c-rjpdx\" (UID: \"522a3a4f-dbc9-4b6a-9bff-5df22b4cba44\") " pod="openstack/barbican-worker-55f6867c5c-rjpdx" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.026751 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/522a3a4f-dbc9-4b6a-9bff-5df22b4cba44-logs\") pod \"barbican-worker-55f6867c5c-rjpdx\" (UID: \"522a3a4f-dbc9-4b6a-9bff-5df22b4cba44\") " pod="openstack/barbican-worker-55f6867c5c-rjpdx" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.026779 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-v8wmw\" (UniqueName: \"kubernetes.io/projected/17dcf560-c08b-4adb-b4e1-90887cddba39-kube-api-access-v8wmw\") pod \"barbican-keystone-listener-56f6884b8b-d9lh4\" (UID: \"17dcf560-c08b-4adb-b4e1-90887cddba39\") " pod="openstack/barbican-keystone-listener-56f6884b8b-d9lh4" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.026807 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17dcf560-c08b-4adb-b4e1-90887cddba39-config-data\") pod \"barbican-keystone-listener-56f6884b8b-d9lh4\" (UID: \"17dcf560-c08b-4adb-b4e1-90887cddba39\") " pod="openstack/barbican-keystone-listener-56f6884b8b-d9lh4" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.026833 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17dcf560-c08b-4adb-b4e1-90887cddba39-combined-ca-bundle\") pod \"barbican-keystone-listener-56f6884b8b-d9lh4\" (UID: \"17dcf560-c08b-4adb-b4e1-90887cddba39\") " pod="openstack/barbican-keystone-listener-56f6884b8b-d9lh4" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.026861 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/522a3a4f-dbc9-4b6a-9bff-5df22b4cba44-config-data-custom\") pod \"barbican-worker-55f6867c5c-rjpdx\" (UID: \"522a3a4f-dbc9-4b6a-9bff-5df22b4cba44\") " pod="openstack/barbican-worker-55f6867c5c-rjpdx" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.026922 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17dcf560-c08b-4adb-b4e1-90887cddba39-config-data-custom\") pod \"barbican-keystone-listener-56f6884b8b-d9lh4\" (UID: \"17dcf560-c08b-4adb-b4e1-90887cddba39\") " pod="openstack/barbican-keystone-listener-56f6884b8b-d9lh4" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.032304 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b0e75bc-78b4-45e2-9c55-7b573ab3cc15" path="/var/lib/kubelet/pods/8b0e75bc-78b4-45e2-9c55-7b573ab3cc15/volumes" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.033302 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-56f6884b8b-d9lh4"] Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.054576 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-cpgh9"] Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.055867 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-cpgh9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.093615 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-cpgh9"] Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.129761 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17dcf560-c08b-4adb-b4e1-90887cddba39-config-data-custom\") pod \"barbican-keystone-listener-56f6884b8b-d9lh4\" (UID: \"17dcf560-c08b-4adb-b4e1-90887cddba39\") " pod="openstack/barbican-keystone-listener-56f6884b8b-d9lh4" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.129811 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b11b3a0a-db05-460b-9828-780b3c846f57-dns-svc\") pod \"dnsmasq-dns-6bb684768f-cpgh9\" (UID: \"b11b3a0a-db05-460b-9828-780b3c846f57\") " pod="openstack/dnsmasq-dns-6bb684768f-cpgh9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.129839 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b11b3a0a-db05-460b-9828-780b3c846f57-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-cpgh9\" (UID: \"b11b3a0a-db05-460b-9828-780b3c846f57\") " pod="openstack/dnsmasq-dns-6bb684768f-cpgh9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.129870 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b11b3a0a-db05-460b-9828-780b3c846f57-config\") pod \"dnsmasq-dns-6bb684768f-cpgh9\" (UID: \"b11b3a0a-db05-460b-9828-780b3c846f57\") " pod="openstack/dnsmasq-dns-6bb684768f-cpgh9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.129900 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/522a3a4f-dbc9-4b6a-9bff-5df22b4cba44-combined-ca-bundle\") pod \"barbican-worker-55f6867c5c-rjpdx\" (UID: \"522a3a4f-dbc9-4b6a-9bff-5df22b4cba44\") " pod="openstack/barbican-worker-55f6867c5c-rjpdx" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.129924 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b11b3a0a-db05-460b-9828-780b3c846f57-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-cpgh9\" (UID: \"b11b3a0a-db05-460b-9828-780b3c846f57\") " pod="openstack/dnsmasq-dns-6bb684768f-cpgh9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.129961 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17dcf560-c08b-4adb-b4e1-90887cddba39-logs\") pod \"barbican-keystone-listener-56f6884b8b-d9lh4\" (UID: \"17dcf560-c08b-4adb-b4e1-90887cddba39\") " pod="openstack/barbican-keystone-listener-56f6884b8b-d9lh4" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.129979 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9977c\" (UniqueName: \"kubernetes.io/projected/522a3a4f-dbc9-4b6a-9bff-5df22b4cba44-kube-api-access-9977c\") pod \"barbican-worker-55f6867c5c-rjpdx\" (UID: \"522a3a4f-dbc9-4b6a-9bff-5df22b4cba44\") " pod="openstack/barbican-worker-55f6867c5c-rjpdx" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.129998 5072 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/522a3a4f-dbc9-4b6a-9bff-5df22b4cba44-config-data\") pod \"barbican-worker-55f6867c5c-rjpdx\" (UID: \"522a3a4f-dbc9-4b6a-9bff-5df22b4cba44\") " pod="openstack/barbican-worker-55f6867c5c-rjpdx" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.130023 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/522a3a4f-dbc9-4b6a-9bff-5df22b4cba44-logs\") pod \"barbican-worker-55f6867c5c-rjpdx\" (UID: \"522a3a4f-dbc9-4b6a-9bff-5df22b4cba44\") " pod="openstack/barbican-worker-55f6867c5c-rjpdx" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.130042 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8wmw\" (UniqueName: \"kubernetes.io/projected/17dcf560-c08b-4adb-b4e1-90887cddba39-kube-api-access-v8wmw\") pod \"barbican-keystone-listener-56f6884b8b-d9lh4\" (UID: \"17dcf560-c08b-4adb-b4e1-90887cddba39\") " pod="openstack/barbican-keystone-listener-56f6884b8b-d9lh4" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.130059 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17dcf560-c08b-4adb-b4e1-90887cddba39-config-data\") pod \"barbican-keystone-listener-56f6884b8b-d9lh4\" (UID: \"17dcf560-c08b-4adb-b4e1-90887cddba39\") " pod="openstack/barbican-keystone-listener-56f6884b8b-d9lh4" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.130078 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17dcf560-c08b-4adb-b4e1-90887cddba39-combined-ca-bundle\") pod \"barbican-keystone-listener-56f6884b8b-d9lh4\" (UID: \"17dcf560-c08b-4adb-b4e1-90887cddba39\") " pod="openstack/barbican-keystone-listener-56f6884b8b-d9lh4" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.130091 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/522a3a4f-dbc9-4b6a-9bff-5df22b4cba44-config-data-custom\") pod \"barbican-worker-55f6867c5c-rjpdx\" (UID: \"522a3a4f-dbc9-4b6a-9bff-5df22b4cba44\") " pod="openstack/barbican-worker-55f6867c5c-rjpdx" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.130107 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5dss\" (UniqueName: \"kubernetes.io/projected/b11b3a0a-db05-460b-9828-780b3c846f57-kube-api-access-g5dss\") pod \"dnsmasq-dns-6bb684768f-cpgh9\" (UID: \"b11b3a0a-db05-460b-9828-780b3c846f57\") " pod="openstack/dnsmasq-dns-6bb684768f-cpgh9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.131963 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/522a3a4f-dbc9-4b6a-9bff-5df22b4cba44-logs\") pod \"barbican-worker-55f6867c5c-rjpdx\" (UID: \"522a3a4f-dbc9-4b6a-9bff-5df22b4cba44\") " pod="openstack/barbican-worker-55f6867c5c-rjpdx" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.135254 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17dcf560-c08b-4adb-b4e1-90887cddba39-logs\") pod \"barbican-keystone-listener-56f6884b8b-d9lh4\" (UID: \"17dcf560-c08b-4adb-b4e1-90887cddba39\") " pod="openstack/barbican-keystone-listener-56f6884b8b-d9lh4" Nov 24 
11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.137644 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17dcf560-c08b-4adb-b4e1-90887cddba39-config-data\") pod \"barbican-keystone-listener-56f6884b8b-d9lh4\" (UID: \"17dcf560-c08b-4adb-b4e1-90887cddba39\") " pod="openstack/barbican-keystone-listener-56f6884b8b-d9lh4" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.143981 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/522a3a4f-dbc9-4b6a-9bff-5df22b4cba44-config-data-custom\") pod \"barbican-worker-55f6867c5c-rjpdx\" (UID: \"522a3a4f-dbc9-4b6a-9bff-5df22b4cba44\") " pod="openstack/barbican-worker-55f6867c5c-rjpdx" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.144083 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/522a3a4f-dbc9-4b6a-9bff-5df22b4cba44-combined-ca-bundle\") pod \"barbican-worker-55f6867c5c-rjpdx\" (UID: \"522a3a4f-dbc9-4b6a-9bff-5df22b4cba44\") " pod="openstack/barbican-worker-55f6867c5c-rjpdx" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.144478 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17dcf560-c08b-4adb-b4e1-90887cddba39-combined-ca-bundle\") pod \"barbican-keystone-listener-56f6884b8b-d9lh4\" (UID: \"17dcf560-c08b-4adb-b4e1-90887cddba39\") " pod="openstack/barbican-keystone-listener-56f6884b8b-d9lh4" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.145419 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/522a3a4f-dbc9-4b6a-9bff-5df22b4cba44-config-data\") pod \"barbican-worker-55f6867c5c-rjpdx\" (UID: \"522a3a4f-dbc9-4b6a-9bff-5df22b4cba44\") " pod="openstack/barbican-worker-55f6867c5c-rjpdx" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.146061 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17dcf560-c08b-4adb-b4e1-90887cddba39-config-data-custom\") pod \"barbican-keystone-listener-56f6884b8b-d9lh4\" (UID: \"17dcf560-c08b-4adb-b4e1-90887cddba39\") " pod="openstack/barbican-keystone-listener-56f6884b8b-d9lh4" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.148239 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8wmw\" (UniqueName: \"kubernetes.io/projected/17dcf560-c08b-4adb-b4e1-90887cddba39-kube-api-access-v8wmw\") pod \"barbican-keystone-listener-56f6884b8b-d9lh4\" (UID: \"17dcf560-c08b-4adb-b4e1-90887cddba39\") " pod="openstack/barbican-keystone-listener-56f6884b8b-d9lh4" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.161504 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9977c\" (UniqueName: \"kubernetes.io/projected/522a3a4f-dbc9-4b6a-9bff-5df22b4cba44-kube-api-access-9977c\") pod \"barbican-worker-55f6867c5c-rjpdx\" (UID: \"522a3a4f-dbc9-4b6a-9bff-5df22b4cba44\") " pod="openstack/barbican-worker-55f6867c5c-rjpdx" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.176535 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-78b9c4bd46-swfr9"] Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.177877 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-78b9c4bd46-swfr9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.182243 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.184140 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-78b9c4bd46-swfr9"] Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.232007 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e3a041-841d-423f-80a2-69a532d7975e-combined-ca-bundle\") pod \"barbican-api-78b9c4bd46-swfr9\" (UID: \"e2e3a041-841d-423f-80a2-69a532d7975e\") " pod="openstack/barbican-api-78b9c4bd46-swfr9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.232080 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2e3a041-841d-423f-80a2-69a532d7975e-logs\") pod \"barbican-api-78b9c4bd46-swfr9\" (UID: \"e2e3a041-841d-423f-80a2-69a532d7975e\") " pod="openstack/barbican-api-78b9c4bd46-swfr9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.232130 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b11b3a0a-db05-460b-9828-780b3c846f57-dns-svc\") pod \"dnsmasq-dns-6bb684768f-cpgh9\" (UID: \"b11b3a0a-db05-460b-9828-780b3c846f57\") " pod="openstack/dnsmasq-dns-6bb684768f-cpgh9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.232156 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b11b3a0a-db05-460b-9828-780b3c846f57-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-cpgh9\" (UID: \"b11b3a0a-db05-460b-9828-780b3c846f57\") " pod="openstack/dnsmasq-dns-6bb684768f-cpgh9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.232365 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5tkl\" (UniqueName: \"kubernetes.io/projected/e2e3a041-841d-423f-80a2-69a532d7975e-kube-api-access-m5tkl\") pod \"barbican-api-78b9c4bd46-swfr9\" (UID: \"e2e3a041-841d-423f-80a2-69a532d7975e\") " pod="openstack/barbican-api-78b9c4bd46-swfr9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.232532 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b11b3a0a-db05-460b-9828-780b3c846f57-config\") pod \"dnsmasq-dns-6bb684768f-cpgh9\" (UID: \"b11b3a0a-db05-460b-9828-780b3c846f57\") " pod="openstack/dnsmasq-dns-6bb684768f-cpgh9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.232616 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b11b3a0a-db05-460b-9828-780b3c846f57-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-cpgh9\" (UID: \"b11b3a0a-db05-460b-9828-780b3c846f57\") " pod="openstack/dnsmasq-dns-6bb684768f-cpgh9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.232652 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2e3a041-841d-423f-80a2-69a532d7975e-config-data-custom\") pod \"barbican-api-78b9c4bd46-swfr9\" (UID: \"e2e3a041-841d-423f-80a2-69a532d7975e\") " 
pod="openstack/barbican-api-78b9c4bd46-swfr9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.232678 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e3a041-841d-423f-80a2-69a532d7975e-config-data\") pod \"barbican-api-78b9c4bd46-swfr9\" (UID: \"e2e3a041-841d-423f-80a2-69a532d7975e\") " pod="openstack/barbican-api-78b9c4bd46-swfr9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.232703 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5dss\" (UniqueName: \"kubernetes.io/projected/b11b3a0a-db05-460b-9828-780b3c846f57-kube-api-access-g5dss\") pod \"dnsmasq-dns-6bb684768f-cpgh9\" (UID: \"b11b3a0a-db05-460b-9828-780b3c846f57\") " pod="openstack/dnsmasq-dns-6bb684768f-cpgh9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.233112 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b11b3a0a-db05-460b-9828-780b3c846f57-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-cpgh9\" (UID: \"b11b3a0a-db05-460b-9828-780b3c846f57\") " pod="openstack/dnsmasq-dns-6bb684768f-cpgh9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.233800 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b11b3a0a-db05-460b-9828-780b3c846f57-config\") pod \"dnsmasq-dns-6bb684768f-cpgh9\" (UID: \"b11b3a0a-db05-460b-9828-780b3c846f57\") " pod="openstack/dnsmasq-dns-6bb684768f-cpgh9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.233850 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b11b3a0a-db05-460b-9828-780b3c846f57-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-cpgh9\" (UID: \"b11b3a0a-db05-460b-9828-780b3c846f57\") " pod="openstack/dnsmasq-dns-6bb684768f-cpgh9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.234324 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b11b3a0a-db05-460b-9828-780b3c846f57-dns-svc\") pod \"dnsmasq-dns-6bb684768f-cpgh9\" (UID: \"b11b3a0a-db05-460b-9828-780b3c846f57\") " pod="openstack/dnsmasq-dns-6bb684768f-cpgh9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.249009 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5dss\" (UniqueName: \"kubernetes.io/projected/b11b3a0a-db05-460b-9828-780b3c846f57-kube-api-access-g5dss\") pod \"dnsmasq-dns-6bb684768f-cpgh9\" (UID: \"b11b3a0a-db05-460b-9828-780b3c846f57\") " pod="openstack/dnsmasq-dns-6bb684768f-cpgh9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.326752 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-55f6867c5c-rjpdx" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.333718 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2e3a041-841d-423f-80a2-69a532d7975e-config-data-custom\") pod \"barbican-api-78b9c4bd46-swfr9\" (UID: \"e2e3a041-841d-423f-80a2-69a532d7975e\") " pod="openstack/barbican-api-78b9c4bd46-swfr9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.333755 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e3a041-841d-423f-80a2-69a532d7975e-config-data\") pod \"barbican-api-78b9c4bd46-swfr9\" (UID: \"e2e3a041-841d-423f-80a2-69a532d7975e\") " pod="openstack/barbican-api-78b9c4bd46-swfr9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.333793 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e3a041-841d-423f-80a2-69a532d7975e-combined-ca-bundle\") pod \"barbican-api-78b9c4bd46-swfr9\" (UID: \"e2e3a041-841d-423f-80a2-69a532d7975e\") " pod="openstack/barbican-api-78b9c4bd46-swfr9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.333810 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2e3a041-841d-423f-80a2-69a532d7975e-logs\") pod \"barbican-api-78b9c4bd46-swfr9\" (UID: \"e2e3a041-841d-423f-80a2-69a532d7975e\") " pod="openstack/barbican-api-78b9c4bd46-swfr9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.333864 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5tkl\" (UniqueName: \"kubernetes.io/projected/e2e3a041-841d-423f-80a2-69a532d7975e-kube-api-access-m5tkl\") pod \"barbican-api-78b9c4bd46-swfr9\" (UID: \"e2e3a041-841d-423f-80a2-69a532d7975e\") " pod="openstack/barbican-api-78b9c4bd46-swfr9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.337438 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2e3a041-841d-423f-80a2-69a532d7975e-config-data-custom\") pod \"barbican-api-78b9c4bd46-swfr9\" (UID: \"e2e3a041-841d-423f-80a2-69a532d7975e\") " pod="openstack/barbican-api-78b9c4bd46-swfr9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.337651 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2e3a041-841d-423f-80a2-69a532d7975e-logs\") pod \"barbican-api-78b9c4bd46-swfr9\" (UID: \"e2e3a041-841d-423f-80a2-69a532d7975e\") " pod="openstack/barbican-api-78b9c4bd46-swfr9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.340640 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e3a041-841d-423f-80a2-69a532d7975e-config-data\") pod \"barbican-api-78b9c4bd46-swfr9\" (UID: \"e2e3a041-841d-423f-80a2-69a532d7975e\") " pod="openstack/barbican-api-78b9c4bd46-swfr9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.345621 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e3a041-841d-423f-80a2-69a532d7975e-combined-ca-bundle\") pod \"barbican-api-78b9c4bd46-swfr9\" (UID: \"e2e3a041-841d-423f-80a2-69a532d7975e\") " pod="openstack/barbican-api-78b9c4bd46-swfr9" Nov 24 11:26:25 
crc kubenswrapper[5072]: I1124 11:26:25.353453 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5tkl\" (UniqueName: \"kubernetes.io/projected/e2e3a041-841d-423f-80a2-69a532d7975e-kube-api-access-m5tkl\") pod \"barbican-api-78b9c4bd46-swfr9\" (UID: \"e2e3a041-841d-423f-80a2-69a532d7975e\") " pod="openstack/barbican-api-78b9c4bd46-swfr9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.368074 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-56f6884b8b-d9lh4" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.384326 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-cpgh9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.493611 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78b9c4bd46-swfr9" Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.580517 5072 generic.go:334] "Generic (PLEG): container finished" podID="ab063039-b4d9-45d8-9336-35316fd1ab08" containerID="8a22f32584c45f6be5f8cd8133d0159b79ad525fbafc02835bd59e52937a16e9" exitCode=0 Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.581340 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8npk7" event={"ID":"ab063039-b4d9-45d8-9336-35316fd1ab08","Type":"ContainerDied","Data":"8a22f32584c45f6be5f8cd8133d0159b79ad525fbafc02835bd59e52937a16e9"} Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.589021 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b596a610-936b-465e-aa9d-cb3b8f7811a4","Type":"ContainerStarted","Data":"0dea738dbd0d20ab607009a71fc10cafb721363f18aae1e2bccbd2b2f516fc90"} Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.807081 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-55f6867c5c-rjpdx"] Nov 24 11:26:25 crc kubenswrapper[5072]: W1124 11:26:25.814801 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod522a3a4f_dbc9_4b6a_9bff_5df22b4cba44.slice/crio-58129a5fd1ea11f0e0f572e494a59e64d80348a7a904d680dff1ebbc6db2cca9 WatchSource:0}: Error finding container 58129a5fd1ea11f0e0f572e494a59e64d80348a7a904d680dff1ebbc6db2cca9: Status 404 returned error can't find the container with id 58129a5fd1ea11f0e0f572e494a59e64d80348a7a904d680dff1ebbc6db2cca9 Nov 24 11:26:25 crc kubenswrapper[5072]: W1124 11:26:25.889093 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17dcf560_c08b_4adb_b4e1_90887cddba39.slice/crio-adc5e5698377444a381c9daf3c587ff343f37978a07d1b03407f535820cf4efe WatchSource:0}: Error finding container adc5e5698377444a381c9daf3c587ff343f37978a07d1b03407f535820cf4efe: Status 404 returned error can't find the container with id adc5e5698377444a381c9daf3c587ff343f37978a07d1b03407f535820cf4efe Nov 24 11:26:25 crc kubenswrapper[5072]: I1124 11:26:25.890297 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-56f6884b8b-d9lh4"] Nov 24 11:26:26 crc kubenswrapper[5072]: W1124 11:26:26.035111 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb11b3a0a_db05_460b_9828_780b3c846f57.slice/crio-e0a96074778574a1af2ff0356ae9c9b8ec6c422e885ddaa0f663855f13a8a115 
WatchSource:0}: Error finding container e0a96074778574a1af2ff0356ae9c9b8ec6c422e885ddaa0f663855f13a8a115: Status 404 returned error can't find the container with id e0a96074778574a1af2ff0356ae9c9b8ec6c422e885ddaa0f663855f13a8a115 Nov 24 11:26:26 crc kubenswrapper[5072]: I1124 11:26:26.039068 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-cpgh9"] Nov 24 11:26:26 crc kubenswrapper[5072]: W1124 11:26:26.053895 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2e3a041_841d_423f_80a2_69a532d7975e.slice/crio-330e6bc24e2590bdbb1d631e734508b38fee4811a9e73783fbbc1db9c17cf857 WatchSource:0}: Error finding container 330e6bc24e2590bdbb1d631e734508b38fee4811a9e73783fbbc1db9c17cf857: Status 404 returned error can't find the container with id 330e6bc24e2590bdbb1d631e734508b38fee4811a9e73783fbbc1db9c17cf857 Nov 24 11:26:26 crc kubenswrapper[5072]: I1124 11:26:26.059018 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-78b9c4bd46-swfr9"] Nov 24 11:26:26 crc kubenswrapper[5072]: I1124 11:26:26.603495 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55f6867c5c-rjpdx" event={"ID":"522a3a4f-dbc9-4b6a-9bff-5df22b4cba44","Type":"ContainerStarted","Data":"58129a5fd1ea11f0e0f572e494a59e64d80348a7a904d680dff1ebbc6db2cca9"} Nov 24 11:26:26 crc kubenswrapper[5072]: I1124 11:26:26.605874 5072 generic.go:334] "Generic (PLEG): container finished" podID="b11b3a0a-db05-460b-9828-780b3c846f57" containerID="44084e54c129aa5894d5bb2fafb178593cd8b8a363ad43aa4c52e31c04b9a770" exitCode=0 Nov 24 11:26:26 crc kubenswrapper[5072]: I1124 11:26:26.605938 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-cpgh9" event={"ID":"b11b3a0a-db05-460b-9828-780b3c846f57","Type":"ContainerDied","Data":"44084e54c129aa5894d5bb2fafb178593cd8b8a363ad43aa4c52e31c04b9a770"} Nov 24 11:26:26 crc kubenswrapper[5072]: I1124 11:26:26.605966 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-cpgh9" event={"ID":"b11b3a0a-db05-460b-9828-780b3c846f57","Type":"ContainerStarted","Data":"e0a96074778574a1af2ff0356ae9c9b8ec6c422e885ddaa0f663855f13a8a115"} Nov 24 11:26:26 crc kubenswrapper[5072]: I1124 11:26:26.612974 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b596a610-936b-465e-aa9d-cb3b8f7811a4","Type":"ContainerStarted","Data":"41e23318d797772b3402c14be3112eeff9df54547c6f7c9ab1098c4abcfe8773"} Nov 24 11:26:26 crc kubenswrapper[5072]: I1124 11:26:26.613028 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b596a610-936b-465e-aa9d-cb3b8f7811a4","Type":"ContainerStarted","Data":"000fde3ba0f07a2e05d9e3c475c3113c4786af8bf4e719407ca1f4881edfff42"} Nov 24 11:26:26 crc kubenswrapper[5072]: I1124 11:26:26.617573 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78b9c4bd46-swfr9" event={"ID":"e2e3a041-841d-423f-80a2-69a532d7975e","Type":"ContainerStarted","Data":"8fb527f5d6ddd8d4b88947f9401ba87140e158e8c5717cf73cb7fc32c96fa384"} Nov 24 11:26:26 crc kubenswrapper[5072]: I1124 11:26:26.617608 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78b9c4bd46-swfr9" event={"ID":"e2e3a041-841d-423f-80a2-69a532d7975e","Type":"ContainerStarted","Data":"432125256d8e6ebf3f40b12d3968a14a8bf85de1183cd18ef41c27797db697c7"} Nov 24 11:26:26 crc 
kubenswrapper[5072]: I1124 11:26:26.617618 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78b9c4bd46-swfr9" event={"ID":"e2e3a041-841d-423f-80a2-69a532d7975e","Type":"ContainerStarted","Data":"330e6bc24e2590bdbb1d631e734508b38fee4811a9e73783fbbc1db9c17cf857"} Nov 24 11:26:26 crc kubenswrapper[5072]: I1124 11:26:26.618338 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-78b9c4bd46-swfr9" Nov 24 11:26:26 crc kubenswrapper[5072]: I1124 11:26:26.618362 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-78b9c4bd46-swfr9" Nov 24 11:26:26 crc kubenswrapper[5072]: I1124 11:26:26.620469 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-56f6884b8b-d9lh4" event={"ID":"17dcf560-c08b-4adb-b4e1-90887cddba39","Type":"ContainerStarted","Data":"adc5e5698377444a381c9daf3c587ff343f37978a07d1b03407f535820cf4efe"} Nov 24 11:26:26 crc kubenswrapper[5072]: I1124 11:26:26.646140 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-78b9c4bd46-swfr9" podStartSLOduration=1.6461155729999999 podStartE2EDuration="1.646115573s" podCreationTimestamp="2025-11-24 11:26:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:26:26.64284592 +0000 UTC m=+1038.354370396" watchObservedRunningTime="2025-11-24 11:26:26.646115573 +0000 UTC m=+1038.357640049" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.217187 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-8npk7" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.373719 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab063039-b4d9-45d8-9336-35316fd1ab08-scripts\") pod \"ab063039-b4d9-45d8-9336-35316fd1ab08\" (UID: \"ab063039-b4d9-45d8-9336-35316fd1ab08\") " Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.373768 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ab063039-b4d9-45d8-9336-35316fd1ab08-db-sync-config-data\") pod \"ab063039-b4d9-45d8-9336-35316fd1ab08\" (UID: \"ab063039-b4d9-45d8-9336-35316fd1ab08\") " Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.373831 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab063039-b4d9-45d8-9336-35316fd1ab08-config-data\") pod \"ab063039-b4d9-45d8-9336-35316fd1ab08\" (UID: \"ab063039-b4d9-45d8-9336-35316fd1ab08\") " Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.373855 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tl5c\" (UniqueName: \"kubernetes.io/projected/ab063039-b4d9-45d8-9336-35316fd1ab08-kube-api-access-8tl5c\") pod \"ab063039-b4d9-45d8-9336-35316fd1ab08\" (UID: \"ab063039-b4d9-45d8-9336-35316fd1ab08\") " Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.373933 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab063039-b4d9-45d8-9336-35316fd1ab08-combined-ca-bundle\") pod \"ab063039-b4d9-45d8-9336-35316fd1ab08\" (UID: \"ab063039-b4d9-45d8-9336-35316fd1ab08\") " Nov 24 11:26:27 crc kubenswrapper[5072]: 
I1124 11:26:27.373958 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab063039-b4d9-45d8-9336-35316fd1ab08-etc-machine-id\") pod \"ab063039-b4d9-45d8-9336-35316fd1ab08\" (UID: \"ab063039-b4d9-45d8-9336-35316fd1ab08\") " Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.374420 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab063039-b4d9-45d8-9336-35316fd1ab08-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ab063039-b4d9-45d8-9336-35316fd1ab08" (UID: "ab063039-b4d9-45d8-9336-35316fd1ab08"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.379278 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab063039-b4d9-45d8-9336-35316fd1ab08-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ab063039-b4d9-45d8-9336-35316fd1ab08" (UID: "ab063039-b4d9-45d8-9336-35316fd1ab08"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.380452 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab063039-b4d9-45d8-9336-35316fd1ab08-scripts" (OuterVolumeSpecName: "scripts") pod "ab063039-b4d9-45d8-9336-35316fd1ab08" (UID: "ab063039-b4d9-45d8-9336-35316fd1ab08"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.386524 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab063039-b4d9-45d8-9336-35316fd1ab08-kube-api-access-8tl5c" (OuterVolumeSpecName: "kube-api-access-8tl5c") pod "ab063039-b4d9-45d8-9336-35316fd1ab08" (UID: "ab063039-b4d9-45d8-9336-35316fd1ab08"). InnerVolumeSpecName "kube-api-access-8tl5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.410585 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab063039-b4d9-45d8-9336-35316fd1ab08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab063039-b4d9-45d8-9336-35316fd1ab08" (UID: "ab063039-b4d9-45d8-9336-35316fd1ab08"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.423584 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab063039-b4d9-45d8-9336-35316fd1ab08-config-data" (OuterVolumeSpecName: "config-data") pod "ab063039-b4d9-45d8-9336-35316fd1ab08" (UID: "ab063039-b4d9-45d8-9336-35316fd1ab08"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.475666 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab063039-b4d9-45d8-9336-35316fd1ab08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.475702 5072 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab063039-b4d9-45d8-9336-35316fd1ab08-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.475715 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab063039-b4d9-45d8-9336-35316fd1ab08-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.475726 5072 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ab063039-b4d9-45d8-9336-35316fd1ab08-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.475738 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab063039-b4d9-45d8-9336-35316fd1ab08-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.475752 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tl5c\" (UniqueName: \"kubernetes.io/projected/ab063039-b4d9-45d8-9336-35316fd1ab08-kube-api-access-8tl5c\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.650018 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8npk7" event={"ID":"ab063039-b4d9-45d8-9336-35316fd1ab08","Type":"ContainerDied","Data":"ed8d58bb6d200b2eed07554c18358fbda3effb95e82793acbfa6e6f8373b4e18"} Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.650064 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed8d58bb6d200b2eed07554c18358fbda3effb95e82793acbfa6e6f8373b4e18" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.650035 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-8npk7" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.653693 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-cpgh9" event={"ID":"b11b3a0a-db05-460b-9828-780b3c846f57","Type":"ContainerStarted","Data":"77cdf06c9f719e88bcbde84d955642437319ff624182ab29307049a391ad780b"} Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.653757 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb684768f-cpgh9" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.677923 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb684768f-cpgh9" podStartSLOduration=3.677906036 podStartE2EDuration="3.677906036s" podCreationTimestamp="2025-11-24 11:26:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:26:27.671481483 +0000 UTC m=+1039.383005979" watchObservedRunningTime="2025-11-24 11:26:27.677906036 +0000 UTC m=+1039.389430512" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.704511 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7785cf9ff8-jrntg"] Nov 24 11:26:27 crc kubenswrapper[5072]: E1124 11:26:27.704925 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab063039-b4d9-45d8-9336-35316fd1ab08" containerName="cinder-db-sync" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.704946 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab063039-b4d9-45d8-9336-35316fd1ab08" containerName="cinder-db-sync" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.705175 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab063039-b4d9-45d8-9336-35316fd1ab08" containerName="cinder-db-sync" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.707393 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7785cf9ff8-jrntg" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.713737 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.718159 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.719288 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7785cf9ff8-jrntg"] Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.784282 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02bf4aaa-02e9-42b0-96e7-182557310711-config-data-custom\") pod \"barbican-api-7785cf9ff8-jrntg\" (UID: \"02bf4aaa-02e9-42b0-96e7-182557310711\") " pod="openstack/barbican-api-7785cf9ff8-jrntg" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.784321 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02bf4aaa-02e9-42b0-96e7-182557310711-combined-ca-bundle\") pod \"barbican-api-7785cf9ff8-jrntg\" (UID: \"02bf4aaa-02e9-42b0-96e7-182557310711\") " pod="openstack/barbican-api-7785cf9ff8-jrntg" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.784426 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02bf4aaa-02e9-42b0-96e7-182557310711-public-tls-certs\") pod \"barbican-api-7785cf9ff8-jrntg\" (UID: \"02bf4aaa-02e9-42b0-96e7-182557310711\") " pod="openstack/barbican-api-7785cf9ff8-jrntg" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.784447 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq5xs\" (UniqueName: \"kubernetes.io/projected/02bf4aaa-02e9-42b0-96e7-182557310711-kube-api-access-zq5xs\") pod \"barbican-api-7785cf9ff8-jrntg\" (UID: \"02bf4aaa-02e9-42b0-96e7-182557310711\") " pod="openstack/barbican-api-7785cf9ff8-jrntg" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.784478 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02bf4aaa-02e9-42b0-96e7-182557310711-logs\") pod \"barbican-api-7785cf9ff8-jrntg\" (UID: \"02bf4aaa-02e9-42b0-96e7-182557310711\") " pod="openstack/barbican-api-7785cf9ff8-jrntg" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.784544 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02bf4aaa-02e9-42b0-96e7-182557310711-internal-tls-certs\") pod \"barbican-api-7785cf9ff8-jrntg\" (UID: \"02bf4aaa-02e9-42b0-96e7-182557310711\") " pod="openstack/barbican-api-7785cf9ff8-jrntg" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.784564 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02bf4aaa-02e9-42b0-96e7-182557310711-config-data\") pod \"barbican-api-7785cf9ff8-jrntg\" (UID: \"02bf4aaa-02e9-42b0-96e7-182557310711\") " pod="openstack/barbican-api-7785cf9ff8-jrntg" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.887224 5072 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02bf4aaa-02e9-42b0-96e7-182557310711-config-data-custom\") pod \"barbican-api-7785cf9ff8-jrntg\" (UID: \"02bf4aaa-02e9-42b0-96e7-182557310711\") " pod="openstack/barbican-api-7785cf9ff8-jrntg" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.887504 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02bf4aaa-02e9-42b0-96e7-182557310711-combined-ca-bundle\") pod \"barbican-api-7785cf9ff8-jrntg\" (UID: \"02bf4aaa-02e9-42b0-96e7-182557310711\") " pod="openstack/barbican-api-7785cf9ff8-jrntg" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.887569 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02bf4aaa-02e9-42b0-96e7-182557310711-public-tls-certs\") pod \"barbican-api-7785cf9ff8-jrntg\" (UID: \"02bf4aaa-02e9-42b0-96e7-182557310711\") " pod="openstack/barbican-api-7785cf9ff8-jrntg" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.887591 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq5xs\" (UniqueName: \"kubernetes.io/projected/02bf4aaa-02e9-42b0-96e7-182557310711-kube-api-access-zq5xs\") pod \"barbican-api-7785cf9ff8-jrntg\" (UID: \"02bf4aaa-02e9-42b0-96e7-182557310711\") " pod="openstack/barbican-api-7785cf9ff8-jrntg" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.887612 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02bf4aaa-02e9-42b0-96e7-182557310711-logs\") pod \"barbican-api-7785cf9ff8-jrntg\" (UID: \"02bf4aaa-02e9-42b0-96e7-182557310711\") " pod="openstack/barbican-api-7785cf9ff8-jrntg" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.887669 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02bf4aaa-02e9-42b0-96e7-182557310711-internal-tls-certs\") pod \"barbican-api-7785cf9ff8-jrntg\" (UID: \"02bf4aaa-02e9-42b0-96e7-182557310711\") " pod="openstack/barbican-api-7785cf9ff8-jrntg" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.887690 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02bf4aaa-02e9-42b0-96e7-182557310711-config-data\") pod \"barbican-api-7785cf9ff8-jrntg\" (UID: \"02bf4aaa-02e9-42b0-96e7-182557310711\") " pod="openstack/barbican-api-7785cf9ff8-jrntg" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.893363 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02bf4aaa-02e9-42b0-96e7-182557310711-config-data\") pod \"barbican-api-7785cf9ff8-jrntg\" (UID: \"02bf4aaa-02e9-42b0-96e7-182557310711\") " pod="openstack/barbican-api-7785cf9ff8-jrntg" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.899668 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02bf4aaa-02e9-42b0-96e7-182557310711-logs\") pod \"barbican-api-7785cf9ff8-jrntg\" (UID: \"02bf4aaa-02e9-42b0-96e7-182557310711\") " pod="openstack/barbican-api-7785cf9ff8-jrntg" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.906829 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/02bf4aaa-02e9-42b0-96e7-182557310711-internal-tls-certs\") pod \"barbican-api-7785cf9ff8-jrntg\" (UID: \"02bf4aaa-02e9-42b0-96e7-182557310711\") " pod="openstack/barbican-api-7785cf9ff8-jrntg" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.908214 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02bf4aaa-02e9-42b0-96e7-182557310711-public-tls-certs\") pod \"barbican-api-7785cf9ff8-jrntg\" (UID: \"02bf4aaa-02e9-42b0-96e7-182557310711\") " pod="openstack/barbican-api-7785cf9ff8-jrntg" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.915956 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02bf4aaa-02e9-42b0-96e7-182557310711-combined-ca-bundle\") pod \"barbican-api-7785cf9ff8-jrntg\" (UID: \"02bf4aaa-02e9-42b0-96e7-182557310711\") " pod="openstack/barbican-api-7785cf9ff8-jrntg" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.920443 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02bf4aaa-02e9-42b0-96e7-182557310711-config-data-custom\") pod \"barbican-api-7785cf9ff8-jrntg\" (UID: \"02bf4aaa-02e9-42b0-96e7-182557310711\") " pod="openstack/barbican-api-7785cf9ff8-jrntg" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.946981 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq5xs\" (UniqueName: \"kubernetes.io/projected/02bf4aaa-02e9-42b0-96e7-182557310711-kube-api-access-zq5xs\") pod \"barbican-api-7785cf9ff8-jrntg\" (UID: \"02bf4aaa-02e9-42b0-96e7-182557310711\") " pod="openstack/barbican-api-7785cf9ff8-jrntg" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.978441 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.980095 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.989800 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.990444 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.990692 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9mkjw" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.990855 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 24 11:26:27 crc kubenswrapper[5072]: I1124 11:26:27.994562 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.028541 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-cpgh9"] Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.039045 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7785cf9ff8-jrntg" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.083873 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-nf7ht"] Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.093490 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b87dd9d8-b704-4a8b-9037-a27242b516da-scripts\") pod \"cinder-scheduler-0\" (UID: \"b87dd9d8-b704-4a8b-9037-a27242b516da\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.093528 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b87dd9d8-b704-4a8b-9037-a27242b516da-config-data\") pod \"cinder-scheduler-0\" (UID: \"b87dd9d8-b704-4a8b-9037-a27242b516da\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.093563 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b87dd9d8-b704-4a8b-9037-a27242b516da-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b87dd9d8-b704-4a8b-9037-a27242b516da\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.093583 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b87dd9d8-b704-4a8b-9037-a27242b516da-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b87dd9d8-b704-4a8b-9037-a27242b516da\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.093682 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zp7d\" (UniqueName: \"kubernetes.io/projected/b87dd9d8-b704-4a8b-9037-a27242b516da-kube-api-access-9zp7d\") pod \"cinder-scheduler-0\" (UID: \"b87dd9d8-b704-4a8b-9037-a27242b516da\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.093728 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b87dd9d8-b704-4a8b-9037-a27242b516da-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b87dd9d8-b704-4a8b-9037-a27242b516da\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.101937 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-nf7ht" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.181161 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-nf7ht"] Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.223663 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6203439c-7b33-45b5-b052-9a09e6df2f11-config\") pod \"dnsmasq-dns-6d97fcdd8f-nf7ht\" (UID: \"6203439c-7b33-45b5-b052-9a09e6df2f11\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-nf7ht" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.223729 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b87dd9d8-b704-4a8b-9037-a27242b516da-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b87dd9d8-b704-4a8b-9037-a27242b516da\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.223753 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b87dd9d8-b704-4a8b-9037-a27242b516da-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b87dd9d8-b704-4a8b-9037-a27242b516da\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.223808 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6203439c-7b33-45b5-b052-9a09e6df2f11-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-nf7ht\" (UID: \"6203439c-7b33-45b5-b052-9a09e6df2f11\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-nf7ht" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.223828 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6203439c-7b33-45b5-b052-9a09e6df2f11-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-nf7ht\" (UID: \"6203439c-7b33-45b5-b052-9a09e6df2f11\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-nf7ht" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.223881 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zp7d\" (UniqueName: \"kubernetes.io/projected/b87dd9d8-b704-4a8b-9037-a27242b516da-kube-api-access-9zp7d\") pod \"cinder-scheduler-0\" (UID: \"b87dd9d8-b704-4a8b-9037-a27242b516da\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.223898 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6203439c-7b33-45b5-b052-9a09e6df2f11-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-nf7ht\" (UID: \"6203439c-7b33-45b5-b052-9a09e6df2f11\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-nf7ht" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.223941 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b87dd9d8-b704-4a8b-9037-a27242b516da-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b87dd9d8-b704-4a8b-9037-a27242b516da\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.223993 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rwks\" (UniqueName: 
\"kubernetes.io/projected/6203439c-7b33-45b5-b052-9a09e6df2f11-kube-api-access-8rwks\") pod \"dnsmasq-dns-6d97fcdd8f-nf7ht\" (UID: \"6203439c-7b33-45b5-b052-9a09e6df2f11\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-nf7ht" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.224010 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b87dd9d8-b704-4a8b-9037-a27242b516da-scripts\") pod \"cinder-scheduler-0\" (UID: \"b87dd9d8-b704-4a8b-9037-a27242b516da\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.224029 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b87dd9d8-b704-4a8b-9037-a27242b516da-config-data\") pod \"cinder-scheduler-0\" (UID: \"b87dd9d8-b704-4a8b-9037-a27242b516da\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.224315 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b87dd9d8-b704-4a8b-9037-a27242b516da-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b87dd9d8-b704-4a8b-9037-a27242b516da\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.262540 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.264387 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.269752 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.289563 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b87dd9d8-b704-4a8b-9037-a27242b516da-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b87dd9d8-b704-4a8b-9037-a27242b516da\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.299653 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b87dd9d8-b704-4a8b-9037-a27242b516da-scripts\") pod \"cinder-scheduler-0\" (UID: \"b87dd9d8-b704-4a8b-9037-a27242b516da\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.300043 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b87dd9d8-b704-4a8b-9037-a27242b516da-config-data\") pod \"cinder-scheduler-0\" (UID: \"b87dd9d8-b704-4a8b-9037-a27242b516da\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.300243 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b87dd9d8-b704-4a8b-9037-a27242b516da-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b87dd9d8-b704-4a8b-9037-a27242b516da\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.302397 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.308703 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zp7d\" (UniqueName: 
\"kubernetes.io/projected/b87dd9d8-b704-4a8b-9037-a27242b516da-kube-api-access-9zp7d\") pod \"cinder-scheduler-0\" (UID: \"b87dd9d8-b704-4a8b-9037-a27242b516da\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.332255 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6203439c-7b33-45b5-b052-9a09e6df2f11-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-nf7ht\" (UID: \"6203439c-7b33-45b5-b052-9a09e6df2f11\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-nf7ht" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.332350 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rwks\" (UniqueName: \"kubernetes.io/projected/6203439c-7b33-45b5-b052-9a09e6df2f11-kube-api-access-8rwks\") pod \"dnsmasq-dns-6d97fcdd8f-nf7ht\" (UID: \"6203439c-7b33-45b5-b052-9a09e6df2f11\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-nf7ht" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.332387 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6203439c-7b33-45b5-b052-9a09e6df2f11-config\") pod \"dnsmasq-dns-6d97fcdd8f-nf7ht\" (UID: \"6203439c-7b33-45b5-b052-9a09e6df2f11\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-nf7ht" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.332440 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6203439c-7b33-45b5-b052-9a09e6df2f11-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-nf7ht\" (UID: \"6203439c-7b33-45b5-b052-9a09e6df2f11\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-nf7ht" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.332463 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6203439c-7b33-45b5-b052-9a09e6df2f11-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-nf7ht\" (UID: \"6203439c-7b33-45b5-b052-9a09e6df2f11\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-nf7ht" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.333418 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6203439c-7b33-45b5-b052-9a09e6df2f11-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-nf7ht\" (UID: \"6203439c-7b33-45b5-b052-9a09e6df2f11\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-nf7ht" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.333913 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6203439c-7b33-45b5-b052-9a09e6df2f11-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-nf7ht\" (UID: \"6203439c-7b33-45b5-b052-9a09e6df2f11\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-nf7ht" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.334778 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6203439c-7b33-45b5-b052-9a09e6df2f11-config\") pod \"dnsmasq-dns-6d97fcdd8f-nf7ht\" (UID: \"6203439c-7b33-45b5-b052-9a09e6df2f11\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-nf7ht" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.335263 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6203439c-7b33-45b5-b052-9a09e6df2f11-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-nf7ht\" (UID: 
\"6203439c-7b33-45b5-b052-9a09e6df2f11\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-nf7ht" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.342799 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.374173 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rwks\" (UniqueName: \"kubernetes.io/projected/6203439c-7b33-45b5-b052-9a09e6df2f11-kube-api-access-8rwks\") pod \"dnsmasq-dns-6d97fcdd8f-nf7ht\" (UID: \"6203439c-7b33-45b5-b052-9a09e6df2f11\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-nf7ht" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.436355 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aa8ea85-5c78-4e77-921c-2558e2aa6237-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") " pod="openstack/cinder-api-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.436438 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8aa8ea85-5c78-4e77-921c-2558e2aa6237-config-data\") pod \"cinder-api-0\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") " pod="openstack/cinder-api-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.436486 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8aa8ea85-5c78-4e77-921c-2558e2aa6237-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") " pod="openstack/cinder-api-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.436531 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8aa8ea85-5c78-4e77-921c-2558e2aa6237-logs\") pod \"cinder-api-0\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") " pod="openstack/cinder-api-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.436568 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8aa8ea85-5c78-4e77-921c-2558e2aa6237-scripts\") pod \"cinder-api-0\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") " pod="openstack/cinder-api-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.436588 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6vsz\" (UniqueName: \"kubernetes.io/projected/8aa8ea85-5c78-4e77-921c-2558e2aa6237-kube-api-access-v6vsz\") pod \"cinder-api-0\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") " pod="openstack/cinder-api-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.436617 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8aa8ea85-5c78-4e77-921c-2558e2aa6237-config-data-custom\") pod \"cinder-api-0\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") " pod="openstack/cinder-api-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.474086 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-nf7ht" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.538291 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aa8ea85-5c78-4e77-921c-2558e2aa6237-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") " pod="openstack/cinder-api-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.538339 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8aa8ea85-5c78-4e77-921c-2558e2aa6237-config-data\") pod \"cinder-api-0\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") " pod="openstack/cinder-api-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.538398 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8aa8ea85-5c78-4e77-921c-2558e2aa6237-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") " pod="openstack/cinder-api-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.538455 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8aa8ea85-5c78-4e77-921c-2558e2aa6237-logs\") pod \"cinder-api-0\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") " pod="openstack/cinder-api-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.538494 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8aa8ea85-5c78-4e77-921c-2558e2aa6237-scripts\") pod \"cinder-api-0\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") " pod="openstack/cinder-api-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.538512 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6vsz\" (UniqueName: \"kubernetes.io/projected/8aa8ea85-5c78-4e77-921c-2558e2aa6237-kube-api-access-v6vsz\") pod \"cinder-api-0\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") " pod="openstack/cinder-api-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.538547 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8aa8ea85-5c78-4e77-921c-2558e2aa6237-config-data-custom\") pod \"cinder-api-0\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") " pod="openstack/cinder-api-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.538850 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8aa8ea85-5c78-4e77-921c-2558e2aa6237-logs\") pod \"cinder-api-0\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") " pod="openstack/cinder-api-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.538542 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8aa8ea85-5c78-4e77-921c-2558e2aa6237-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") " pod="openstack/cinder-api-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.546016 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8aa8ea85-5c78-4e77-921c-2558e2aa6237-scripts\") pod \"cinder-api-0\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") " 
pod="openstack/cinder-api-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.546253 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8aa8ea85-5c78-4e77-921c-2558e2aa6237-config-data\") pod \"cinder-api-0\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") " pod="openstack/cinder-api-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.546257 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aa8ea85-5c78-4e77-921c-2558e2aa6237-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") " pod="openstack/cinder-api-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.546783 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8aa8ea85-5c78-4e77-921c-2558e2aa6237-config-data-custom\") pod \"cinder-api-0\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") " pod="openstack/cinder-api-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.560045 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6vsz\" (UniqueName: \"kubernetes.io/projected/8aa8ea85-5c78-4e77-921c-2558e2aa6237-kube-api-access-v6vsz\") pod \"cinder-api-0\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") " pod="openstack/cinder-api-0" Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.676646 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-56f6884b8b-d9lh4" event={"ID":"17dcf560-c08b-4adb-b4e1-90887cddba39","Type":"ContainerStarted","Data":"ff574c14ae9a67950cb22e101c59c77d1e98656fb983e47fe1513660e95de91a"} Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.676947 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-56f6884b8b-d9lh4" event={"ID":"17dcf560-c08b-4adb-b4e1-90887cddba39","Type":"ContainerStarted","Data":"c0c7083c578e25d66475ff81ccf7908556787fc493d2e6545638fa313b83ef4c"} Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.678450 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7785cf9ff8-jrntg"] Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.694754 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55f6867c5c-rjpdx" event={"ID":"522a3a4f-dbc9-4b6a-9bff-5df22b4cba44","Type":"ContainerStarted","Data":"d226fb54f2a5877bcd7aacc7b208fc79cb00018524f3b361e6158ea4e8237416"} Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.694804 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55f6867c5c-rjpdx" event={"ID":"522a3a4f-dbc9-4b6a-9bff-5df22b4cba44","Type":"ContainerStarted","Data":"2bca4fb0725e9e6b73653b4e2e71d051cbfcbb7b02962692a689cc02c628819f"} Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.706747 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-56f6884b8b-d9lh4" podStartSLOduration=3.051708426 podStartE2EDuration="4.706723334s" podCreationTimestamp="2025-11-24 11:26:24 +0000 UTC" firstStartedPulling="2025-11-24 11:26:25.89110211 +0000 UTC m=+1037.602626586" lastFinishedPulling="2025-11-24 11:26:27.546117018 +0000 UTC m=+1039.257641494" observedRunningTime="2025-11-24 11:26:28.691773105 +0000 UTC m=+1040.403297581" watchObservedRunningTime="2025-11-24 11:26:28.706723334 +0000 UTC m=+1040.418247810" Nov 24 
Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.717766 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b596a610-936b-465e-aa9d-cb3b8f7811a4","Type":"ContainerStarted","Data":"d58ac5848c669e06620802778cb91f5a9261b93ea91426bd7da12b6e1c704a06"}
Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.732852 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-55f6867c5c-rjpdx" podStartSLOduration=3.008699498 podStartE2EDuration="4.732825295s" podCreationTimestamp="2025-11-24 11:26:24 +0000 UTC" firstStartedPulling="2025-11-24 11:26:25.816403289 +0000 UTC m=+1037.527927765" lastFinishedPulling="2025-11-24 11:26:27.540529086 +0000 UTC m=+1039.252053562" observedRunningTime="2025-11-24 11:26:28.714893781 +0000 UTC m=+1040.426418257" watchObservedRunningTime="2025-11-24 11:26:28.732825295 +0000 UTC m=+1040.444349811"
Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.753727 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.336830343 podStartE2EDuration="5.753703614s" podCreationTimestamp="2025-11-24 11:26:23 +0000 UTC" firstStartedPulling="2025-11-24 11:26:24.417309303 +0000 UTC m=+1036.128833779" lastFinishedPulling="2025-11-24 11:26:27.834182574 +0000 UTC m=+1039.545707050" observedRunningTime="2025-11-24 11:26:28.744759557 +0000 UTC m=+1040.456284053" watchObservedRunningTime="2025-11-24 11:26:28.753703614 +0000 UTC m=+1040.465228090"
Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.834465 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 24 11:26:28 crc kubenswrapper[5072]: I1124 11:26:28.872666 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 24 11:26:28 crc kubenswrapper[5072]: W1124 11:26:28.886192 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb87dd9d8_b704_4a8b_9037_a27242b516da.slice/crio-90556f8fadd0f4cb64afd0a5a4c5cb0a4fe22948a727cafe6ee2ec62652c1dd0 WatchSource:0}: Error finding container 90556f8fadd0f4cb64afd0a5a4c5cb0a4fe22948a727cafe6ee2ec62652c1dd0: Status 404 returned error can't find the container with id 90556f8fadd0f4cb64afd0a5a4c5cb0a4fe22948a727cafe6ee2ec62652c1dd0
Nov 24 11:26:29 crc kubenswrapper[5072]: I1124 11:26:29.066302 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-nf7ht"]
Nov 24 11:26:29 crc kubenswrapper[5072]: I1124 11:26:29.353366 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Nov 24 11:26:29 crc kubenswrapper[5072]: I1124 11:26:29.730263 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7785cf9ff8-jrntg" event={"ID":"02bf4aaa-02e9-42b0-96e7-182557310711","Type":"ContainerStarted","Data":"bc11981e424a9d05a17d072187d7c11c6d013a9a14724477a215aec516d768ba"}
Nov 24 11:26:29 crc kubenswrapper[5072]: I1124 11:26:29.730392 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7785cf9ff8-jrntg"
Nov 24 11:26:29 crc kubenswrapper[5072]: I1124 11:26:29.730421 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7785cf9ff8-jrntg"
Nov 24 11:26:29 crc kubenswrapper[5072]: I1124 11:26:29.730431 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7785cf9ff8-jrntg" event={"ID":"02bf4aaa-02e9-42b0-96e7-182557310711","Type":"ContainerStarted","Data":"34ddfe04b9281697abc3c6a4893a4d983526a9dbf1cbd4d0227d6fb415dc1858"}
event={"ID":"02bf4aaa-02e9-42b0-96e7-182557310711","Type":"ContainerStarted","Data":"34ddfe04b9281697abc3c6a4893a4d983526a9dbf1cbd4d0227d6fb415dc1858"} Nov 24 11:26:29 crc kubenswrapper[5072]: I1124 11:26:29.730445 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7785cf9ff8-jrntg" event={"ID":"02bf4aaa-02e9-42b0-96e7-182557310711","Type":"ContainerStarted","Data":"a9384b8a8d434c23a003ed6d86f32103c45ecbff72b77071d8c76ba8f5dd0e10"} Nov 24 11:26:29 crc kubenswrapper[5072]: I1124 11:26:29.733663 5072 generic.go:334] "Generic (PLEG): container finished" podID="6203439c-7b33-45b5-b052-9a09e6df2f11" containerID="360a5e1a79b597dfa0f67f0a5d0a5d957255ee193b6dcc9922402499eeb0affb" exitCode=0 Nov 24 11:26:29 crc kubenswrapper[5072]: I1124 11:26:29.733733 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-nf7ht" event={"ID":"6203439c-7b33-45b5-b052-9a09e6df2f11","Type":"ContainerDied","Data":"360a5e1a79b597dfa0f67f0a5d0a5d957255ee193b6dcc9922402499eeb0affb"} Nov 24 11:26:29 crc kubenswrapper[5072]: I1124 11:26:29.733759 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-nf7ht" event={"ID":"6203439c-7b33-45b5-b052-9a09e6df2f11","Type":"ContainerStarted","Data":"b2d1d68e6b7e93009ff73c815500d27b65bf45d8cd2d576e9d5affabe170d3c4"} Nov 24 11:26:29 crc kubenswrapper[5072]: I1124 11:26:29.736918 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b87dd9d8-b704-4a8b-9037-a27242b516da","Type":"ContainerStarted","Data":"90556f8fadd0f4cb64afd0a5a4c5cb0a4fe22948a727cafe6ee2ec62652c1dd0"} Nov 24 11:26:29 crc kubenswrapper[5072]: I1124 11:26:29.740217 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8aa8ea85-5c78-4e77-921c-2558e2aa6237","Type":"ContainerStarted","Data":"4cd54fca996e98a6531a39b01a9bfee00ce07fc08b8c090e7dcab67a982a7b11"} Nov 24 11:26:29 crc kubenswrapper[5072]: I1124 11:26:29.741507 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb684768f-cpgh9" podUID="b11b3a0a-db05-460b-9828-780b3c846f57" containerName="dnsmasq-dns" containerID="cri-o://77cdf06c9f719e88bcbde84d955642437319ff624182ab29307049a391ad780b" gracePeriod=10 Nov 24 11:26:29 crc kubenswrapper[5072]: I1124 11:26:29.741866 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 11:26:29 crc kubenswrapper[5072]: I1124 11:26:29.764279 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7785cf9ff8-jrntg" podStartSLOduration=2.764260068 podStartE2EDuration="2.764260068s" podCreationTimestamp="2025-11-24 11:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:26:29.760117503 +0000 UTC m=+1041.471641989" watchObservedRunningTime="2025-11-24 11:26:29.764260068 +0000 UTC m=+1041.475784544" Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.200767 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-cpgh9" Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.281097 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b11b3a0a-db05-460b-9828-780b3c846f57-config\") pod \"b11b3a0a-db05-460b-9828-780b3c846f57\" (UID: \"b11b3a0a-db05-460b-9828-780b3c846f57\") " Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.281785 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b11b3a0a-db05-460b-9828-780b3c846f57-ovsdbserver-sb\") pod \"b11b3a0a-db05-460b-9828-780b3c846f57\" (UID: \"b11b3a0a-db05-460b-9828-780b3c846f57\") " Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.281825 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5dss\" (UniqueName: \"kubernetes.io/projected/b11b3a0a-db05-460b-9828-780b3c846f57-kube-api-access-g5dss\") pod \"b11b3a0a-db05-460b-9828-780b3c846f57\" (UID: \"b11b3a0a-db05-460b-9828-780b3c846f57\") " Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.281862 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b11b3a0a-db05-460b-9828-780b3c846f57-dns-svc\") pod \"b11b3a0a-db05-460b-9828-780b3c846f57\" (UID: \"b11b3a0a-db05-460b-9828-780b3c846f57\") " Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.281907 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b11b3a0a-db05-460b-9828-780b3c846f57-ovsdbserver-nb\") pod \"b11b3a0a-db05-460b-9828-780b3c846f57\" (UID: \"b11b3a0a-db05-460b-9828-780b3c846f57\") " Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.286344 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11b3a0a-db05-460b-9828-780b3c846f57-kube-api-access-g5dss" (OuterVolumeSpecName: "kube-api-access-g5dss") pod "b11b3a0a-db05-460b-9828-780b3c846f57" (UID: "b11b3a0a-db05-460b-9828-780b3c846f57"). InnerVolumeSpecName "kube-api-access-g5dss". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.332711 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b11b3a0a-db05-460b-9828-780b3c846f57-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b11b3a0a-db05-460b-9828-780b3c846f57" (UID: "b11b3a0a-db05-460b-9828-780b3c846f57"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.343043 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b11b3a0a-db05-460b-9828-780b3c846f57-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b11b3a0a-db05-460b-9828-780b3c846f57" (UID: "b11b3a0a-db05-460b-9828-780b3c846f57"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.378871 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b11b3a0a-db05-460b-9828-780b3c846f57-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b11b3a0a-db05-460b-9828-780b3c846f57" (UID: "b11b3a0a-db05-460b-9828-780b3c846f57"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.385706 5072 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b11b3a0a-db05-460b-9828-780b3c846f57-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.385748 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5dss\" (UniqueName: \"kubernetes.io/projected/b11b3a0a-db05-460b-9828-780b3c846f57-kube-api-access-g5dss\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.385764 5072 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b11b3a0a-db05-460b-9828-780b3c846f57-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.385775 5072 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b11b3a0a-db05-460b-9828-780b3c846f57-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.387716 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b11b3a0a-db05-460b-9828-780b3c846f57-config" (OuterVolumeSpecName: "config") pod "b11b3a0a-db05-460b-9828-780b3c846f57" (UID: "b11b3a0a-db05-460b-9828-780b3c846f57"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.486766 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b11b3a0a-db05-460b-9828-780b3c846f57-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.753341 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b87dd9d8-b704-4a8b-9037-a27242b516da","Type":"ContainerStarted","Data":"f519d9c0fdc385aafd8af74fd44984e171a548b9c20c6e69580dfbb4e840ca9a"} Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.756482 5072 generic.go:334] "Generic (PLEG): container finished" podID="b11b3a0a-db05-460b-9828-780b3c846f57" containerID="77cdf06c9f719e88bcbde84d955642437319ff624182ab29307049a391ad780b" exitCode=0 Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.756569 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-cpgh9" Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.756582 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-cpgh9" event={"ID":"b11b3a0a-db05-460b-9828-780b3c846f57","Type":"ContainerDied","Data":"77cdf06c9f719e88bcbde84d955642437319ff624182ab29307049a391ad780b"} Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.756731 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-cpgh9" event={"ID":"b11b3a0a-db05-460b-9828-780b3c846f57","Type":"ContainerDied","Data":"e0a96074778574a1af2ff0356ae9c9b8ec6c422e885ddaa0f663855f13a8a115"} Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.756781 5072 scope.go:117] "RemoveContainer" containerID="77cdf06c9f719e88bcbde84d955642437319ff624182ab29307049a391ad780b" Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.758650 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8aa8ea85-5c78-4e77-921c-2558e2aa6237","Type":"ContainerStarted","Data":"b98c1f862b8f2f28dc27d9b4405a988a4ab96b473e33d08afe85e9ab4b5b854f"} Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.762232 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-nf7ht" event={"ID":"6203439c-7b33-45b5-b052-9a09e6df2f11","Type":"ContainerStarted","Data":"93c1ba59b63148f4a6709489e721e342ceab7993a340f5b44ce6e6491b48edbc"} Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.762455 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d97fcdd8f-nf7ht" Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.781112 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d97fcdd8f-nf7ht" podStartSLOduration=2.781089342 podStartE2EDuration="2.781089342s" podCreationTimestamp="2025-11-24 11:26:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:26:30.780884437 +0000 UTC m=+1042.492408923" watchObservedRunningTime="2025-11-24 11:26:30.781089342 +0000 UTC m=+1042.492613818" Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.806197 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-cpgh9"] Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.812182 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-cpgh9"] Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.830653 5072 scope.go:117] "RemoveContainer" containerID="44084e54c129aa5894d5bb2fafb178593cd8b8a363ad43aa4c52e31c04b9a770" Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.856402 5072 scope.go:117] "RemoveContainer" containerID="77cdf06c9f719e88bcbde84d955642437319ff624182ab29307049a391ad780b" Nov 24 11:26:30 crc kubenswrapper[5072]: E1124 11:26:30.857174 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77cdf06c9f719e88bcbde84d955642437319ff624182ab29307049a391ad780b\": container with ID starting with 77cdf06c9f719e88bcbde84d955642437319ff624182ab29307049a391ad780b not found: ID does not exist" containerID="77cdf06c9f719e88bcbde84d955642437319ff624182ab29307049a391ad780b" Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.857227 5072 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"77cdf06c9f719e88bcbde84d955642437319ff624182ab29307049a391ad780b"} err="failed to get container status \"77cdf06c9f719e88bcbde84d955642437319ff624182ab29307049a391ad780b\": rpc error: code = NotFound desc = could not find container \"77cdf06c9f719e88bcbde84d955642437319ff624182ab29307049a391ad780b\": container with ID starting with 77cdf06c9f719e88bcbde84d955642437319ff624182ab29307049a391ad780b not found: ID does not exist" Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.857261 5072 scope.go:117] "RemoveContainer" containerID="44084e54c129aa5894d5bb2fafb178593cd8b8a363ad43aa4c52e31c04b9a770" Nov 24 11:26:30 crc kubenswrapper[5072]: E1124 11:26:30.857970 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44084e54c129aa5894d5bb2fafb178593cd8b8a363ad43aa4c52e31c04b9a770\": container with ID starting with 44084e54c129aa5894d5bb2fafb178593cd8b8a363ad43aa4c52e31c04b9a770 not found: ID does not exist" containerID="44084e54c129aa5894d5bb2fafb178593cd8b8a363ad43aa4c52e31c04b9a770" Nov 24 11:26:30 crc kubenswrapper[5072]: I1124 11:26:30.858001 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44084e54c129aa5894d5bb2fafb178593cd8b8a363ad43aa4c52e31c04b9a770"} err="failed to get container status \"44084e54c129aa5894d5bb2fafb178593cd8b8a363ad43aa4c52e31c04b9a770\": rpc error: code = NotFound desc = could not find container \"44084e54c129aa5894d5bb2fafb178593cd8b8a363ad43aa4c52e31c04b9a770\": container with ID starting with 44084e54c129aa5894d5bb2fafb178593cd8b8a363ad43aa4c52e31c04b9a770 not found: ID does not exist" Nov 24 11:26:31 crc kubenswrapper[5072]: I1124 11:26:31.031185 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11b3a0a-db05-460b-9828-780b3c846f57" path="/var/lib/kubelet/pods/b11b3a0a-db05-460b-9828-780b3c846f57/volumes" Nov 24 11:26:31 crc kubenswrapper[5072]: I1124 11:26:31.227691 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 11:26:31 crc kubenswrapper[5072]: I1124 11:26:31.773678 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8aa8ea85-5c78-4e77-921c-2558e2aa6237","Type":"ContainerStarted","Data":"068c1ecb5837f4db6a6d4a1215732782aabc13103d3f57b8fd48f1ac050d0d17"} Nov 24 11:26:31 crc kubenswrapper[5072]: I1124 11:26:31.773888 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8aa8ea85-5c78-4e77-921c-2558e2aa6237" containerName="cinder-api-log" containerID="cri-o://b98c1f862b8f2f28dc27d9b4405a988a4ab96b473e33d08afe85e9ab4b5b854f" gracePeriod=30 Nov 24 11:26:31 crc kubenswrapper[5072]: I1124 11:26:31.773917 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8aa8ea85-5c78-4e77-921c-2558e2aa6237" containerName="cinder-api" containerID="cri-o://068c1ecb5837f4db6a6d4a1215732782aabc13103d3f57b8fd48f1ac050d0d17" gracePeriod=30 Nov 24 11:26:31 crc kubenswrapper[5072]: I1124 11:26:31.774101 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 24 11:26:31 crc kubenswrapper[5072]: I1124 11:26:31.784686 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b87dd9d8-b704-4a8b-9037-a27242b516da","Type":"ContainerStarted","Data":"d1df5091ed6b678c0194ade0e72451400f2eaa4117cf8e64600280e1d5d101af"} Nov 
Nov 24 11:26:31 crc kubenswrapper[5072]: I1124 11:26:31.793237 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.793220268 podStartE2EDuration="3.793220268s" podCreationTimestamp="2025-11-24 11:26:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:26:31.791502974 +0000 UTC m=+1043.503027450" watchObservedRunningTime="2025-11-24 11:26:31.793220268 +0000 UTC m=+1043.504744744"
Nov 24 11:26:31 crc kubenswrapper[5072]: I1124 11:26:31.820630 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.865274875 podStartE2EDuration="4.820610961s" podCreationTimestamp="2025-11-24 11:26:27 +0000 UTC" firstStartedPulling="2025-11-24 11:26:28.895626108 +0000 UTC m=+1040.607150584" lastFinishedPulling="2025-11-24 11:26:29.850962194 +0000 UTC m=+1041.562486670" observedRunningTime="2025-11-24 11:26:31.814794684 +0000 UTC m=+1043.526319170" watchObservedRunningTime="2025-11-24 11:26:31.820610961 +0000 UTC m=+1043.532135427"
Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.350247 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.431186 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aa8ea85-5c78-4e77-921c-2558e2aa6237-combined-ca-bundle\") pod \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") "
Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.431265 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8aa8ea85-5c78-4e77-921c-2558e2aa6237-logs\") pod \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") "
Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.431319 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8aa8ea85-5c78-4e77-921c-2558e2aa6237-scripts\") pod \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") "
Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.431429 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8aa8ea85-5c78-4e77-921c-2558e2aa6237-config-data\") pod \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") "
Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.431470 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8aa8ea85-5c78-4e77-921c-2558e2aa6237-etc-machine-id\") pod \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") "
Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.431487 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8aa8ea85-5c78-4e77-921c-2558e2aa6237-config-data-custom\") pod \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") "
Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.431511 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6vsz\" (UniqueName: \"kubernetes.io/projected/8aa8ea85-5c78-4e77-921c-2558e2aa6237-kube-api-access-v6vsz\") pod \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") "
started for volume \"kube-api-access-v6vsz\" (UniqueName: \"kubernetes.io/projected/8aa8ea85-5c78-4e77-921c-2558e2aa6237-kube-api-access-v6vsz\") pod \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\" (UID: \"8aa8ea85-5c78-4e77-921c-2558e2aa6237\") " Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.431600 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8aa8ea85-5c78-4e77-921c-2558e2aa6237-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8aa8ea85-5c78-4e77-921c-2558e2aa6237" (UID: "8aa8ea85-5c78-4e77-921c-2558e2aa6237"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.431841 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8aa8ea85-5c78-4e77-921c-2558e2aa6237-logs" (OuterVolumeSpecName: "logs") pod "8aa8ea85-5c78-4e77-921c-2558e2aa6237" (UID: "8aa8ea85-5c78-4e77-921c-2558e2aa6237"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.431872 5072 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8aa8ea85-5c78-4e77-921c-2558e2aa6237-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.454810 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8aa8ea85-5c78-4e77-921c-2558e2aa6237-scripts" (OuterVolumeSpecName: "scripts") pod "8aa8ea85-5c78-4e77-921c-2558e2aa6237" (UID: "8aa8ea85-5c78-4e77-921c-2558e2aa6237"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.455004 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aa8ea85-5c78-4e77-921c-2558e2aa6237-kube-api-access-v6vsz" (OuterVolumeSpecName: "kube-api-access-v6vsz") pod "8aa8ea85-5c78-4e77-921c-2558e2aa6237" (UID: "8aa8ea85-5c78-4e77-921c-2558e2aa6237"). InnerVolumeSpecName "kube-api-access-v6vsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.455137 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8aa8ea85-5c78-4e77-921c-2558e2aa6237-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8aa8ea85-5c78-4e77-921c-2558e2aa6237" (UID: "8aa8ea85-5c78-4e77-921c-2558e2aa6237"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.467573 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8aa8ea85-5c78-4e77-921c-2558e2aa6237-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8aa8ea85-5c78-4e77-921c-2558e2aa6237" (UID: "8aa8ea85-5c78-4e77-921c-2558e2aa6237"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.507723 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8aa8ea85-5c78-4e77-921c-2558e2aa6237-config-data" (OuterVolumeSpecName: "config-data") pod "8aa8ea85-5c78-4e77-921c-2558e2aa6237" (UID: "8aa8ea85-5c78-4e77-921c-2558e2aa6237"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.533536 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aa8ea85-5c78-4e77-921c-2558e2aa6237-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.533570 5072 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8aa8ea85-5c78-4e77-921c-2558e2aa6237-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.533580 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8aa8ea85-5c78-4e77-921c-2558e2aa6237-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.533588 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8aa8ea85-5c78-4e77-921c-2558e2aa6237-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.533599 5072 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8aa8ea85-5c78-4e77-921c-2558e2aa6237-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.533609 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6vsz\" (UniqueName: \"kubernetes.io/projected/8aa8ea85-5c78-4e77-921c-2558e2aa6237-kube-api-access-v6vsz\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.794311 5072 generic.go:334] "Generic (PLEG): container finished" podID="8aa8ea85-5c78-4e77-921c-2558e2aa6237" containerID="068c1ecb5837f4db6a6d4a1215732782aabc13103d3f57b8fd48f1ac050d0d17" exitCode=0 Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.794352 5072 generic.go:334] "Generic (PLEG): container finished" podID="8aa8ea85-5c78-4e77-921c-2558e2aa6237" containerID="b98c1f862b8f2f28dc27d9b4405a988a4ab96b473e33d08afe85e9ab4b5b854f" exitCode=143 Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.795423 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.796240 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8aa8ea85-5c78-4e77-921c-2558e2aa6237","Type":"ContainerDied","Data":"068c1ecb5837f4db6a6d4a1215732782aabc13103d3f57b8fd48f1ac050d0d17"} Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.796304 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8aa8ea85-5c78-4e77-921c-2558e2aa6237","Type":"ContainerDied","Data":"b98c1f862b8f2f28dc27d9b4405a988a4ab96b473e33d08afe85e9ab4b5b854f"} Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.796324 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8aa8ea85-5c78-4e77-921c-2558e2aa6237","Type":"ContainerDied","Data":"4cd54fca996e98a6531a39b01a9bfee00ce07fc08b8c090e7dcab67a982a7b11"} Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.796344 5072 scope.go:117] "RemoveContainer" containerID="068c1ecb5837f4db6a6d4a1215732782aabc13103d3f57b8fd48f1ac050d0d17" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.825362 5072 scope.go:117] "RemoveContainer" containerID="b98c1f862b8f2f28dc27d9b4405a988a4ab96b473e33d08afe85e9ab4b5b854f" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.825626 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.831281 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.847150 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 11:26:32 crc kubenswrapper[5072]: E1124 11:26:32.847464 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aa8ea85-5c78-4e77-921c-2558e2aa6237" containerName="cinder-api" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.847479 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aa8ea85-5c78-4e77-921c-2558e2aa6237" containerName="cinder-api" Nov 24 11:26:32 crc kubenswrapper[5072]: E1124 11:26:32.847496 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b11b3a0a-db05-460b-9828-780b3c846f57" containerName="dnsmasq-dns" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.847503 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="b11b3a0a-db05-460b-9828-780b3c846f57" containerName="dnsmasq-dns" Nov 24 11:26:32 crc kubenswrapper[5072]: E1124 11:26:32.847519 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aa8ea85-5c78-4e77-921c-2558e2aa6237" containerName="cinder-api-log" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.847525 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aa8ea85-5c78-4e77-921c-2558e2aa6237" containerName="cinder-api-log" Nov 24 11:26:32 crc kubenswrapper[5072]: E1124 11:26:32.847536 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b11b3a0a-db05-460b-9828-780b3c846f57" containerName="init" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.847541 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="b11b3a0a-db05-460b-9828-780b3c846f57" containerName="init" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.847715 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="b11b3a0a-db05-460b-9828-780b3c846f57" containerName="dnsmasq-dns" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.847729 5072 
Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.847747 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="8aa8ea85-5c78-4e77-921c-2558e2aa6237" containerName="cinder-api"
Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.848176 5072 scope.go:117] "RemoveContainer" containerID="068c1ecb5837f4db6a6d4a1215732782aabc13103d3f57b8fd48f1ac050d0d17"
Nov 24 11:26:32 crc kubenswrapper[5072]: E1124 11:26:32.851800 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"068c1ecb5837f4db6a6d4a1215732782aabc13103d3f57b8fd48f1ac050d0d17\": container with ID starting with 068c1ecb5837f4db6a6d4a1215732782aabc13103d3f57b8fd48f1ac050d0d17 not found: ID does not exist" containerID="068c1ecb5837f4db6a6d4a1215732782aabc13103d3f57b8fd48f1ac050d0d17"
Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.851918 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"068c1ecb5837f4db6a6d4a1215732782aabc13103d3f57b8fd48f1ac050d0d17"} err="failed to get container status \"068c1ecb5837f4db6a6d4a1215732782aabc13103d3f57b8fd48f1ac050d0d17\": rpc error: code = NotFound desc = could not find container \"068c1ecb5837f4db6a6d4a1215732782aabc13103d3f57b8fd48f1ac050d0d17\": container with ID starting with 068c1ecb5837f4db6a6d4a1215732782aabc13103d3f57b8fd48f1ac050d0d17 not found: ID does not exist"
Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.852266 5072 scope.go:117] "RemoveContainer" containerID="b98c1f862b8f2f28dc27d9b4405a988a4ab96b473e33d08afe85e9ab4b5b854f"
Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.852612 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 11:26:32 crc kubenswrapper[5072]: E1124 11:26:32.853572 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b98c1f862b8f2f28dc27d9b4405a988a4ab96b473e33d08afe85e9ab4b5b854f\": container with ID starting with b98c1f862b8f2f28dc27d9b4405a988a4ab96b473e33d08afe85e9ab4b5b854f not found: ID does not exist" containerID="b98c1f862b8f2f28dc27d9b4405a988a4ab96b473e33d08afe85e9ab4b5b854f" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.853619 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b98c1f862b8f2f28dc27d9b4405a988a4ab96b473e33d08afe85e9ab4b5b854f"} err="failed to get container status \"b98c1f862b8f2f28dc27d9b4405a988a4ab96b473e33d08afe85e9ab4b5b854f\": rpc error: code = NotFound desc = could not find container \"b98c1f862b8f2f28dc27d9b4405a988a4ab96b473e33d08afe85e9ab4b5b854f\": container with ID starting with b98c1f862b8f2f28dc27d9b4405a988a4ab96b473e33d08afe85e9ab4b5b854f not found: ID does not exist" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.853646 5072 scope.go:117] "RemoveContainer" containerID="068c1ecb5837f4db6a6d4a1215732782aabc13103d3f57b8fd48f1ac050d0d17" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.855426 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"068c1ecb5837f4db6a6d4a1215732782aabc13103d3f57b8fd48f1ac050d0d17"} err="failed to get container status \"068c1ecb5837f4db6a6d4a1215732782aabc13103d3f57b8fd48f1ac050d0d17\": rpc error: code = NotFound desc = could not find container \"068c1ecb5837f4db6a6d4a1215732782aabc13103d3f57b8fd48f1ac050d0d17\": container with ID starting with 068c1ecb5837f4db6a6d4a1215732782aabc13103d3f57b8fd48f1ac050d0d17 not found: ID does not exist" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.855456 5072 scope.go:117] "RemoveContainer" containerID="b98c1f862b8f2f28dc27d9b4405a988a4ab96b473e33d08afe85e9ab4b5b854f" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.855849 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b98c1f862b8f2f28dc27d9b4405a988a4ab96b473e33d08afe85e9ab4b5b854f"} err="failed to get container status \"b98c1f862b8f2f28dc27d9b4405a988a4ab96b473e33d08afe85e9ab4b5b854f\": rpc error: code = NotFound desc = could not find container \"b98c1f862b8f2f28dc27d9b4405a988a4ab96b473e33d08afe85e9ab4b5b854f\": container with ID starting with b98c1f862b8f2f28dc27d9b4405a988a4ab96b473e33d08afe85e9ab4b5b854f not found: ID does not exist" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.856105 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.856271 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.858517 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.873349 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.943945 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/83c629ab-d9bd-4c85-b3e8-7d43a3d1c495-etc-machine-id\") pod \"cinder-api-0\" (UID: \"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495\") " pod="openstack/cinder-api-0" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.943999 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c629ab-d9bd-4c85-b3e8-7d43a3d1c495-config-data\") pod \"cinder-api-0\" (UID: \"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495\") " pod="openstack/cinder-api-0" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.944020 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83c629ab-d9bd-4c85-b3e8-7d43a3d1c495-scripts\") pod \"cinder-api-0\" (UID: \"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495\") " pod="openstack/cinder-api-0" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.944067 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83c629ab-d9bd-4c85-b3e8-7d43a3d1c495-public-tls-certs\") pod \"cinder-api-0\" (UID: \"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495\") " pod="openstack/cinder-api-0" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.944299 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83c629ab-d9bd-4c85-b3e8-7d43a3d1c495-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495\") " pod="openstack/cinder-api-0" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.944351 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f2hz\" (UniqueName: \"kubernetes.io/projected/83c629ab-d9bd-4c85-b3e8-7d43a3d1c495-kube-api-access-6f2hz\") pod \"cinder-api-0\" (UID: \"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495\") " pod="openstack/cinder-api-0" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.944478 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83c629ab-d9bd-4c85-b3e8-7d43a3d1c495-config-data-custom\") pod \"cinder-api-0\" (UID: \"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495\") " pod="openstack/cinder-api-0" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.944585 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c629ab-d9bd-4c85-b3e8-7d43a3d1c495-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495\") " pod="openstack/cinder-api-0" Nov 24 11:26:32 crc kubenswrapper[5072]: I1124 11:26:32.944639 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83c629ab-d9bd-4c85-b3e8-7d43a3d1c495-logs\") pod \"cinder-api-0\" (UID: \"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495\") " pod="openstack/cinder-api-0" Nov 24 11:26:33 crc kubenswrapper[5072]: I1124 11:26:33.029320 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8aa8ea85-5c78-4e77-921c-2558e2aa6237" path="/var/lib/kubelet/pods/8aa8ea85-5c78-4e77-921c-2558e2aa6237/volumes" Nov 24 11:26:33 crc kubenswrapper[5072]: I1124 11:26:33.045945 5072 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83c629ab-d9bd-4c85-b3e8-7d43a3d1c495-config-data-custom\") pod \"cinder-api-0\" (UID: \"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495\") " pod="openstack/cinder-api-0" Nov 24 11:26:33 crc kubenswrapper[5072]: I1124 11:26:33.046035 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c629ab-d9bd-4c85-b3e8-7d43a3d1c495-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495\") " pod="openstack/cinder-api-0" Nov 24 11:26:33 crc kubenswrapper[5072]: I1124 11:26:33.046071 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83c629ab-d9bd-4c85-b3e8-7d43a3d1c495-logs\") pod \"cinder-api-0\" (UID: \"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495\") " pod="openstack/cinder-api-0" Nov 24 11:26:33 crc kubenswrapper[5072]: I1124 11:26:33.046152 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/83c629ab-d9bd-4c85-b3e8-7d43a3d1c495-etc-machine-id\") pod \"cinder-api-0\" (UID: \"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495\") " pod="openstack/cinder-api-0" Nov 24 11:26:33 crc kubenswrapper[5072]: I1124 11:26:33.046190 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c629ab-d9bd-4c85-b3e8-7d43a3d1c495-config-data\") pod \"cinder-api-0\" (UID: \"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495\") " pod="openstack/cinder-api-0" Nov 24 11:26:33 crc kubenswrapper[5072]: I1124 11:26:33.046212 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83c629ab-d9bd-4c85-b3e8-7d43a3d1c495-scripts\") pod \"cinder-api-0\" (UID: \"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495\") " pod="openstack/cinder-api-0" Nov 24 11:26:33 crc kubenswrapper[5072]: I1124 11:26:33.046254 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83c629ab-d9bd-4c85-b3e8-7d43a3d1c495-public-tls-certs\") pod \"cinder-api-0\" (UID: \"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495\") " pod="openstack/cinder-api-0" Nov 24 11:26:33 crc kubenswrapper[5072]: I1124 11:26:33.046293 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/83c629ab-d9bd-4c85-b3e8-7d43a3d1c495-etc-machine-id\") pod \"cinder-api-0\" (UID: \"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495\") " pod="openstack/cinder-api-0" Nov 24 11:26:33 crc kubenswrapper[5072]: I1124 11:26:33.046326 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83c629ab-d9bd-4c85-b3e8-7d43a3d1c495-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495\") " pod="openstack/cinder-api-0" Nov 24 11:26:33 crc kubenswrapper[5072]: I1124 11:26:33.046357 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f2hz\" (UniqueName: \"kubernetes.io/projected/83c629ab-d9bd-4c85-b3e8-7d43a3d1c495-kube-api-access-6f2hz\") pod \"cinder-api-0\" (UID: \"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495\") " pod="openstack/cinder-api-0" Nov 24 11:26:33 crc kubenswrapper[5072]: I1124 11:26:33.049139 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83c629ab-d9bd-4c85-b3e8-7d43a3d1c495-logs\") pod \"cinder-api-0\" (UID: \"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495\") " pod="openstack/cinder-api-0" Nov 24 11:26:33 crc kubenswrapper[5072]: I1124 11:26:33.049914 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83c629ab-d9bd-4c85-b3e8-7d43a3d1c495-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495\") " pod="openstack/cinder-api-0" Nov 24 11:26:33 crc kubenswrapper[5072]: I1124 11:26:33.050304 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83c629ab-d9bd-4c85-b3e8-7d43a3d1c495-scripts\") pod \"cinder-api-0\" (UID: \"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495\") " pod="openstack/cinder-api-0" Nov 24 11:26:33 crc kubenswrapper[5072]: I1124 11:26:33.050753 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83c629ab-d9bd-4c85-b3e8-7d43a3d1c495-config-data-custom\") pod \"cinder-api-0\" (UID: \"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495\") " pod="openstack/cinder-api-0" Nov 24 11:26:33 crc kubenswrapper[5072]: I1124 11:26:33.050867 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c629ab-d9bd-4c85-b3e8-7d43a3d1c495-config-data\") pod \"cinder-api-0\" (UID: \"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495\") " pod="openstack/cinder-api-0" Nov 24 11:26:33 crc kubenswrapper[5072]: I1124 11:26:33.055393 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83c629ab-d9bd-4c85-b3e8-7d43a3d1c495-public-tls-certs\") pod \"cinder-api-0\" (UID: \"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495\") " pod="openstack/cinder-api-0" Nov 24 11:26:33 crc kubenswrapper[5072]: I1124 11:26:33.072121 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c629ab-d9bd-4c85-b3e8-7d43a3d1c495-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495\") " pod="openstack/cinder-api-0" Nov 24 11:26:33 crc kubenswrapper[5072]: I1124 11:26:33.075805 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f2hz\" (UniqueName: \"kubernetes.io/projected/83c629ab-d9bd-4c85-b3e8-7d43a3d1c495-kube-api-access-6f2hz\") pod \"cinder-api-0\" (UID: \"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495\") " pod="openstack/cinder-api-0" Nov 24 11:26:33 crc kubenswrapper[5072]: I1124 11:26:33.113817 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-78b9c4bd46-swfr9" podUID="e2e3a041-841d-423f-80a2-69a532d7975e" containerName="barbican-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:26:33 crc kubenswrapper[5072]: I1124 11:26:33.167537 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 11:26:33 crc kubenswrapper[5072]: I1124 11:26:33.351489 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 24 11:26:33 crc kubenswrapper[5072]: I1124 11:26:33.644266 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 11:26:33 crc kubenswrapper[5072]: I1124 11:26:33.808037 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495","Type":"ContainerStarted","Data":"b8c84d4776da63c6bd2ec16a9bffa6ba3eec1b2440dec0544b81a7f6ac7aca55"} Nov 24 11:26:34 crc kubenswrapper[5072]: I1124 11:26:34.196571 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6765f59d56-zj7gz" Nov 24 11:26:34 crc kubenswrapper[5072]: I1124 11:26:34.539929 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7785cf9ff8-jrntg" Nov 24 11:26:34 crc kubenswrapper[5072]: I1124 11:26:34.819880 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495","Type":"ContainerStarted","Data":"df0d3670fb11254560daee8296c05d4fb02479417d5ea0cae65254d64417b550"} Nov 24 11:26:35 crc kubenswrapper[5072]: I1124 11:26:35.831050 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"83c629ab-d9bd-4c85-b3e8-7d43a3d1c495","Type":"ContainerStarted","Data":"e07f010023e00caaa45f7cc4a60208bebc67bcc464a3b89004511a96dacef607"} Nov 24 11:26:35 crc kubenswrapper[5072]: I1124 11:26:35.832759 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 24 11:26:35 crc kubenswrapper[5072]: I1124 11:26:35.863109 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.863087368 podStartE2EDuration="3.863087368s" podCreationTimestamp="2025-11-24 11:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:26:35.855505816 +0000 UTC m=+1047.567030302" watchObservedRunningTime="2025-11-24 11:26:35.863087368 +0000 UTC m=+1047.574611854" Nov 24 11:26:36 crc kubenswrapper[5072]: I1124 11:26:36.113623 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7785cf9ff8-jrntg" Nov 24 11:26:36 crc kubenswrapper[5072]: I1124 11:26:36.188024 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-78b9c4bd46-swfr9"] Nov 24 11:26:36 crc kubenswrapper[5072]: I1124 11:26:36.188279 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-78b9c4bd46-swfr9" podUID="e2e3a041-841d-423f-80a2-69a532d7975e" containerName="barbican-api-log" containerID="cri-o://432125256d8e6ebf3f40b12d3968a14a8bf85de1183cd18ef41c27797db697c7" gracePeriod=30 Nov 24 11:26:36 crc kubenswrapper[5072]: I1124 11:26:36.188600 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-78b9c4bd46-swfr9" podUID="e2e3a041-841d-423f-80a2-69a532d7975e" containerName="barbican-api" containerID="cri-o://8fb527f5d6ddd8d4b88947f9401ba87140e158e8c5717cf73cb7fc32c96fa384" gracePeriod=30 Nov 24 11:26:36 crc kubenswrapper[5072]: I1124 11:26:36.196317 5072 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/barbican-api-78b9c4bd46-swfr9" podUID="e2e3a041-841d-423f-80a2-69a532d7975e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.147:9311/healthcheck\": EOF" Nov 24 11:26:36 crc kubenswrapper[5072]: I1124 11:26:36.196712 5072 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78b9c4bd46-swfr9" podUID="e2e3a041-841d-423f-80a2-69a532d7975e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.147:9311/healthcheck\": EOF" Nov 24 11:26:36 crc kubenswrapper[5072]: I1124 11:26:36.561936 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6dc7d7697-tf7nw" Nov 24 11:26:36 crc kubenswrapper[5072]: I1124 11:26:36.631332 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6765f59d56-zj7gz"] Nov 24 11:26:36 crc kubenswrapper[5072]: I1124 11:26:36.631656 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6765f59d56-zj7gz" podUID="ea6b17ec-1925-4441-965e-9f2eeca16bec" containerName="neutron-api" containerID="cri-o://520695adde43cd501b9afc9befe9d308cef3532d7c842639fa0993497d308b4e" gracePeriod=30 Nov 24 11:26:36 crc kubenswrapper[5072]: I1124 11:26:36.631680 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6765f59d56-zj7gz" podUID="ea6b17ec-1925-4441-965e-9f2eeca16bec" containerName="neutron-httpd" containerID="cri-o://fa3af4260987b08192d8788da8a5f087c0f3f8e5cbd5e787586354887bec78fe" gracePeriod=30 Nov 24 11:26:36 crc kubenswrapper[5072]: I1124 11:26:36.858208 5072 generic.go:334] "Generic (PLEG): container finished" podID="e2e3a041-841d-423f-80a2-69a532d7975e" containerID="432125256d8e6ebf3f40b12d3968a14a8bf85de1183cd18ef41c27797db697c7" exitCode=143 Nov 24 11:26:36 crc kubenswrapper[5072]: I1124 11:26:36.858471 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78b9c4bd46-swfr9" event={"ID":"e2e3a041-841d-423f-80a2-69a532d7975e","Type":"ContainerDied","Data":"432125256d8e6ebf3f40b12d3968a14a8bf85de1183cd18ef41c27797db697c7"} Nov 24 11:26:36 crc kubenswrapper[5072]: I1124 11:26:36.861132 5072 generic.go:334] "Generic (PLEG): container finished" podID="ea6b17ec-1925-4441-965e-9f2eeca16bec" containerID="fa3af4260987b08192d8788da8a5f087c0f3f8e5cbd5e787586354887bec78fe" exitCode=0 Nov 24 11:26:36 crc kubenswrapper[5072]: I1124 11:26:36.861198 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6765f59d56-zj7gz" event={"ID":"ea6b17ec-1925-4441-965e-9f2eeca16bec","Type":"ContainerDied","Data":"fa3af4260987b08192d8788da8a5f087c0f3f8e5cbd5e787586354887bec78fe"} Nov 24 11:26:38 crc kubenswrapper[5072]: I1124 11:26:38.476603 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d97fcdd8f-nf7ht" Nov 24 11:26:38 crc kubenswrapper[5072]: I1124 11:26:38.540943 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-n4llq"] Nov 24 11:26:38 crc kubenswrapper[5072]: I1124 11:26:38.541192 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b946d459c-n4llq" podUID="0569a2f4-e2fb-4625-a547-a9244109a287" containerName="dnsmasq-dns" containerID="cri-o://aa5f178a132c6f24fb4bd764a33ef9d6d4aac489ef3620699f3193e1f0778570" gracePeriod=10 Nov 24 11:26:38 crc kubenswrapper[5072]: I1124 11:26:38.617883 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/cinder-scheduler-0" Nov 24 11:26:38 crc kubenswrapper[5072]: I1124 11:26:38.671489 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 11:26:38 crc kubenswrapper[5072]: I1124 11:26:38.879271 5072 generic.go:334] "Generic (PLEG): container finished" podID="0569a2f4-e2fb-4625-a547-a9244109a287" containerID="aa5f178a132c6f24fb4bd764a33ef9d6d4aac489ef3620699f3193e1f0778570" exitCode=0 Nov 24 11:26:38 crc kubenswrapper[5072]: I1124 11:26:38.879476 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b87dd9d8-b704-4a8b-9037-a27242b516da" containerName="cinder-scheduler" containerID="cri-o://f519d9c0fdc385aafd8af74fd44984e171a548b9c20c6e69580dfbb4e840ca9a" gracePeriod=30 Nov 24 11:26:38 crc kubenswrapper[5072]: I1124 11:26:38.879755 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-n4llq" event={"ID":"0569a2f4-e2fb-4625-a547-a9244109a287","Type":"ContainerDied","Data":"aa5f178a132c6f24fb4bd764a33ef9d6d4aac489ef3620699f3193e1f0778570"} Nov 24 11:26:38 crc kubenswrapper[5072]: I1124 11:26:38.880049 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b87dd9d8-b704-4a8b-9037-a27242b516da" containerName="probe" containerID="cri-o://d1df5091ed6b678c0194ade0e72451400f2eaa4117cf8e64600280e1d5d101af" gracePeriod=30 Nov 24 11:26:39 crc kubenswrapper[5072]: I1124 11:26:39.093136 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-n4llq" Nov 24 11:26:39 crc kubenswrapper[5072]: I1124 11:26:39.244446 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0569a2f4-e2fb-4625-a547-a9244109a287-ovsdbserver-sb\") pod \"0569a2f4-e2fb-4625-a547-a9244109a287\" (UID: \"0569a2f4-e2fb-4625-a547-a9244109a287\") " Nov 24 11:26:39 crc kubenswrapper[5072]: I1124 11:26:39.244505 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0569a2f4-e2fb-4625-a547-a9244109a287-config\") pod \"0569a2f4-e2fb-4625-a547-a9244109a287\" (UID: \"0569a2f4-e2fb-4625-a547-a9244109a287\") " Nov 24 11:26:39 crc kubenswrapper[5072]: I1124 11:26:39.244570 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0569a2f4-e2fb-4625-a547-a9244109a287-ovsdbserver-nb\") pod \"0569a2f4-e2fb-4625-a547-a9244109a287\" (UID: \"0569a2f4-e2fb-4625-a547-a9244109a287\") " Nov 24 11:26:39 crc kubenswrapper[5072]: I1124 11:26:39.244595 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0569a2f4-e2fb-4625-a547-a9244109a287-dns-svc\") pod \"0569a2f4-e2fb-4625-a547-a9244109a287\" (UID: \"0569a2f4-e2fb-4625-a547-a9244109a287\") " Nov 24 11:26:39 crc kubenswrapper[5072]: I1124 11:26:39.244637 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27w5p\" (UniqueName: \"kubernetes.io/projected/0569a2f4-e2fb-4625-a547-a9244109a287-kube-api-access-27w5p\") pod \"0569a2f4-e2fb-4625-a547-a9244109a287\" (UID: \"0569a2f4-e2fb-4625-a547-a9244109a287\") " Nov 24 11:26:39 crc kubenswrapper[5072]: I1124 11:26:39.265584 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/0569a2f4-e2fb-4625-a547-a9244109a287-kube-api-access-27w5p" (OuterVolumeSpecName: "kube-api-access-27w5p") pod "0569a2f4-e2fb-4625-a547-a9244109a287" (UID: "0569a2f4-e2fb-4625-a547-a9244109a287"). InnerVolumeSpecName "kube-api-access-27w5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:26:39 crc kubenswrapper[5072]: I1124 11:26:39.292917 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0569a2f4-e2fb-4625-a547-a9244109a287-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0569a2f4-e2fb-4625-a547-a9244109a287" (UID: "0569a2f4-e2fb-4625-a547-a9244109a287"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:26:39 crc kubenswrapper[5072]: I1124 11:26:39.295911 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0569a2f4-e2fb-4625-a547-a9244109a287-config" (OuterVolumeSpecName: "config") pod "0569a2f4-e2fb-4625-a547-a9244109a287" (UID: "0569a2f4-e2fb-4625-a547-a9244109a287"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:26:39 crc kubenswrapper[5072]: I1124 11:26:39.323339 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0569a2f4-e2fb-4625-a547-a9244109a287-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0569a2f4-e2fb-4625-a547-a9244109a287" (UID: "0569a2f4-e2fb-4625-a547-a9244109a287"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:26:39 crc kubenswrapper[5072]: I1124 11:26:39.330983 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0569a2f4-e2fb-4625-a547-a9244109a287-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0569a2f4-e2fb-4625-a547-a9244109a287" (UID: "0569a2f4-e2fb-4625-a547-a9244109a287"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:26:39 crc kubenswrapper[5072]: I1124 11:26:39.346198 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27w5p\" (UniqueName: \"kubernetes.io/projected/0569a2f4-e2fb-4625-a547-a9244109a287-kube-api-access-27w5p\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:39 crc kubenswrapper[5072]: I1124 11:26:39.346245 5072 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0569a2f4-e2fb-4625-a547-a9244109a287-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:39 crc kubenswrapper[5072]: I1124 11:26:39.346256 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0569a2f4-e2fb-4625-a547-a9244109a287-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:39 crc kubenswrapper[5072]: I1124 11:26:39.346265 5072 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0569a2f4-e2fb-4625-a547-a9244109a287-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:39 crc kubenswrapper[5072]: I1124 11:26:39.346277 5072 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0569a2f4-e2fb-4625-a547-a9244109a287-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:39 crc kubenswrapper[5072]: I1124 11:26:39.903787 5072 generic.go:334] "Generic (PLEG): container finished" podID="b87dd9d8-b704-4a8b-9037-a27242b516da" containerID="d1df5091ed6b678c0194ade0e72451400f2eaa4117cf8e64600280e1d5d101af" exitCode=0 Nov 24 11:26:39 crc kubenswrapper[5072]: I1124 11:26:39.903851 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b87dd9d8-b704-4a8b-9037-a27242b516da","Type":"ContainerDied","Data":"d1df5091ed6b678c0194ade0e72451400f2eaa4117cf8e64600280e1d5d101af"} Nov 24 11:26:39 crc kubenswrapper[5072]: I1124 11:26:39.908318 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-n4llq" event={"ID":"0569a2f4-e2fb-4625-a547-a9244109a287","Type":"ContainerDied","Data":"810306c0b02a9c0d6c50fef46a80e382fd1bfb2df7dc1b35d6877adc5ce49677"} Nov 24 11:26:39 crc kubenswrapper[5072]: I1124 11:26:39.908478 5072 scope.go:117] "RemoveContainer" containerID="aa5f178a132c6f24fb4bd764a33ef9d6d4aac489ef3620699f3193e1f0778570" Nov 24 11:26:39 crc kubenswrapper[5072]: I1124 11:26:39.908995 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-n4llq" Nov 24 11:26:39 crc kubenswrapper[5072]: I1124 11:26:39.932740 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-64d9f94c7b-p7b2p" Nov 24 11:26:39 crc kubenswrapper[5072]: I1124 11:26:39.945160 5072 scope.go:117] "RemoveContainer" containerID="fbe7265e908585ef0adee5887602c27361c3e52b01e60532bf15f49311b82a21" Nov 24 11:26:39 crc kubenswrapper[5072]: I1124 11:26:39.983054 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-n4llq"] Nov 24 11:26:39 crc kubenswrapper[5072]: I1124 11:26:39.989976 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-n4llq"] Nov 24 11:26:40 crc kubenswrapper[5072]: I1124 11:26:40.047169 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-64d9f94c7b-p7b2p" Nov 24 11:26:40 crc kubenswrapper[5072]: I1124 11:26:40.641006 5072 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78b9c4bd46-swfr9" podUID="e2e3a041-841d-423f-80a2-69a532d7975e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.147:9311/healthcheck\": read tcp 10.217.0.2:33806->10.217.0.147:9311: read: connection reset by peer" Nov 24 11:26:40 crc kubenswrapper[5072]: I1124 11:26:40.641064 5072 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78b9c4bd46-swfr9" podUID="e2e3a041-841d-423f-80a2-69a532d7975e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.147:9311/healthcheck\": read tcp 10.217.0.2:33810->10.217.0.147:9311: read: connection reset by peer" Nov 24 11:26:40 crc kubenswrapper[5072]: I1124 11:26:40.641756 5072 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78b9c4bd46-swfr9" podUID="e2e3a041-841d-423f-80a2-69a532d7975e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.147:9311/healthcheck\": dial tcp 10.217.0.147:9311: connect: connection refused" Nov 24 11:26:40 crc kubenswrapper[5072]: I1124 11:26:40.641825 5072 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78b9c4bd46-swfr9" podUID="e2e3a041-841d-423f-80a2-69a532d7975e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.147:9311/healthcheck\": dial tcp 10.217.0.147:9311: connect: connection refused" Nov 24 11:26:40 crc kubenswrapper[5072]: I1124 11:26:40.918854 5072 generic.go:334] "Generic (PLEG): container finished" podID="e2e3a041-841d-423f-80a2-69a532d7975e" containerID="8fb527f5d6ddd8d4b88947f9401ba87140e158e8c5717cf73cb7fc32c96fa384" exitCode=0 Nov 24 11:26:40 crc kubenswrapper[5072]: I1124 11:26:40.918916 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78b9c4bd46-swfr9" event={"ID":"e2e3a041-841d-423f-80a2-69a532d7975e","Type":"ContainerDied","Data":"8fb527f5d6ddd8d4b88947f9401ba87140e158e8c5717cf73cb7fc32c96fa384"} Nov 24 11:26:41 crc kubenswrapper[5072]: I1124 11:26:41.030225 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0569a2f4-e2fb-4625-a547-a9244109a287" path="/var/lib/kubelet/pods/0569a2f4-e2fb-4625-a547-a9244109a287/volumes" Nov 24 11:26:41 crc kubenswrapper[5072]: I1124 11:26:41.053205 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-78b9c4bd46-swfr9" Nov 24 11:26:41 crc kubenswrapper[5072]: I1124 11:26:41.092989 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5tkl\" (UniqueName: \"kubernetes.io/projected/e2e3a041-841d-423f-80a2-69a532d7975e-kube-api-access-m5tkl\") pod \"e2e3a041-841d-423f-80a2-69a532d7975e\" (UID: \"e2e3a041-841d-423f-80a2-69a532d7975e\") " Nov 24 11:26:41 crc kubenswrapper[5072]: I1124 11:26:41.093145 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2e3a041-841d-423f-80a2-69a532d7975e-config-data-custom\") pod \"e2e3a041-841d-423f-80a2-69a532d7975e\" (UID: \"e2e3a041-841d-423f-80a2-69a532d7975e\") " Nov 24 11:26:41 crc kubenswrapper[5072]: I1124 11:26:41.093176 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2e3a041-841d-423f-80a2-69a532d7975e-logs\") pod \"e2e3a041-841d-423f-80a2-69a532d7975e\" (UID: \"e2e3a041-841d-423f-80a2-69a532d7975e\") " Nov 24 11:26:41 crc kubenswrapper[5072]: I1124 11:26:41.093236 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e3a041-841d-423f-80a2-69a532d7975e-config-data\") pod \"e2e3a041-841d-423f-80a2-69a532d7975e\" (UID: \"e2e3a041-841d-423f-80a2-69a532d7975e\") " Nov 24 11:26:41 crc kubenswrapper[5072]: I1124 11:26:41.093305 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e3a041-841d-423f-80a2-69a532d7975e-combined-ca-bundle\") pod \"e2e3a041-841d-423f-80a2-69a532d7975e\" (UID: \"e2e3a041-841d-423f-80a2-69a532d7975e\") " Nov 24 11:26:41 crc kubenswrapper[5072]: I1124 11:26:41.095600 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2e3a041-841d-423f-80a2-69a532d7975e-logs" (OuterVolumeSpecName: "logs") pod "e2e3a041-841d-423f-80a2-69a532d7975e" (UID: "e2e3a041-841d-423f-80a2-69a532d7975e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:26:41 crc kubenswrapper[5072]: I1124 11:26:41.099599 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2e3a041-841d-423f-80a2-69a532d7975e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e2e3a041-841d-423f-80a2-69a532d7975e" (UID: "e2e3a041-841d-423f-80a2-69a532d7975e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:41 crc kubenswrapper[5072]: I1124 11:26:41.106399 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2e3a041-841d-423f-80a2-69a532d7975e-kube-api-access-m5tkl" (OuterVolumeSpecName: "kube-api-access-m5tkl") pod "e2e3a041-841d-423f-80a2-69a532d7975e" (UID: "e2e3a041-841d-423f-80a2-69a532d7975e"). InnerVolumeSpecName "kube-api-access-m5tkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:26:41 crc kubenswrapper[5072]: I1124 11:26:41.137903 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2e3a041-841d-423f-80a2-69a532d7975e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2e3a041-841d-423f-80a2-69a532d7975e" (UID: "e2e3a041-841d-423f-80a2-69a532d7975e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:41 crc kubenswrapper[5072]: I1124 11:26:41.159440 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2e3a041-841d-423f-80a2-69a532d7975e-config-data" (OuterVolumeSpecName: "config-data") pod "e2e3a041-841d-423f-80a2-69a532d7975e" (UID: "e2e3a041-841d-423f-80a2-69a532d7975e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:41 crc kubenswrapper[5072]: I1124 11:26:41.195656 5072 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2e3a041-841d-423f-80a2-69a532d7975e-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:41 crc kubenswrapper[5072]: I1124 11:26:41.195718 5072 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2e3a041-841d-423f-80a2-69a532d7975e-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:41 crc kubenswrapper[5072]: I1124 11:26:41.195728 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e3a041-841d-423f-80a2-69a532d7975e-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:41 crc kubenswrapper[5072]: I1124 11:26:41.195737 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e3a041-841d-423f-80a2-69a532d7975e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:41 crc kubenswrapper[5072]: I1124 11:26:41.195746 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5tkl\" (UniqueName: \"kubernetes.io/projected/e2e3a041-841d-423f-80a2-69a532d7975e-kube-api-access-m5tkl\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:41 crc kubenswrapper[5072]: I1124 11:26:41.931785 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78b9c4bd46-swfr9" event={"ID":"e2e3a041-841d-423f-80a2-69a532d7975e","Type":"ContainerDied","Data":"330e6bc24e2590bdbb1d631e734508b38fee4811a9e73783fbbc1db9c17cf857"} Nov 24 11:26:41 crc kubenswrapper[5072]: I1124 11:26:41.931847 5072 scope.go:117] "RemoveContainer" containerID="8fb527f5d6ddd8d4b88947f9401ba87140e158e8c5717cf73cb7fc32c96fa384" Nov 24 11:26:41 crc kubenswrapper[5072]: I1124 11:26:41.931955 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78b9c4bd46-swfr9" Nov 24 11:26:41 crc kubenswrapper[5072]: I1124 11:26:41.977641 5072 scope.go:117] "RemoveContainer" containerID="432125256d8e6ebf3f40b12d3968a14a8bf85de1183cd18ef41c27797db697c7" Nov 24 11:26:41 crc kubenswrapper[5072]: I1124 11:26:41.985115 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-78b9c4bd46-swfr9"] Nov 24 11:26:41 crc kubenswrapper[5072]: I1124 11:26:41.994594 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-78b9c4bd46-swfr9"] Nov 24 11:26:43 crc kubenswrapper[5072]: I1124 11:26:43.026826 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2e3a041-841d-423f-80a2-69a532d7975e" path="/var/lib/kubelet/pods/e2e3a041-841d-423f-80a2-69a532d7975e/volumes" Nov 24 11:26:43 crc kubenswrapper[5072]: I1124 11:26:43.641184 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 11:26:43 crc kubenswrapper[5072]: I1124 11:26:43.740855 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b87dd9d8-b704-4a8b-9037-a27242b516da-config-data-custom\") pod \"b87dd9d8-b704-4a8b-9037-a27242b516da\" (UID: \"b87dd9d8-b704-4a8b-9037-a27242b516da\") " Nov 24 11:26:43 crc kubenswrapper[5072]: I1124 11:26:43.740919 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b87dd9d8-b704-4a8b-9037-a27242b516da-config-data\") pod \"b87dd9d8-b704-4a8b-9037-a27242b516da\" (UID: \"b87dd9d8-b704-4a8b-9037-a27242b516da\") " Nov 24 11:26:43 crc kubenswrapper[5072]: I1124 11:26:43.741000 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b87dd9d8-b704-4a8b-9037-a27242b516da-etc-machine-id\") pod \"b87dd9d8-b704-4a8b-9037-a27242b516da\" (UID: \"b87dd9d8-b704-4a8b-9037-a27242b516da\") " Nov 24 11:26:43 crc kubenswrapper[5072]: I1124 11:26:43.741019 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b87dd9d8-b704-4a8b-9037-a27242b516da-combined-ca-bundle\") pod \"b87dd9d8-b704-4a8b-9037-a27242b516da\" (UID: \"b87dd9d8-b704-4a8b-9037-a27242b516da\") " Nov 24 11:26:43 crc kubenswrapper[5072]: I1124 11:26:43.741051 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zp7d\" (UniqueName: \"kubernetes.io/projected/b87dd9d8-b704-4a8b-9037-a27242b516da-kube-api-access-9zp7d\") pod \"b87dd9d8-b704-4a8b-9037-a27242b516da\" (UID: \"b87dd9d8-b704-4a8b-9037-a27242b516da\") " Nov 24 11:26:43 crc kubenswrapper[5072]: I1124 11:26:43.741079 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b87dd9d8-b704-4a8b-9037-a27242b516da-scripts\") pod \"b87dd9d8-b704-4a8b-9037-a27242b516da\" (UID: \"b87dd9d8-b704-4a8b-9037-a27242b516da\") " Nov 24 11:26:43 crc kubenswrapper[5072]: I1124 11:26:43.742783 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b87dd9d8-b704-4a8b-9037-a27242b516da-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b87dd9d8-b704-4a8b-9037-a27242b516da" (UID: "b87dd9d8-b704-4a8b-9037-a27242b516da"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:26:43 crc kubenswrapper[5072]: I1124 11:26:43.747797 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b87dd9d8-b704-4a8b-9037-a27242b516da-scripts" (OuterVolumeSpecName: "scripts") pod "b87dd9d8-b704-4a8b-9037-a27242b516da" (UID: "b87dd9d8-b704-4a8b-9037-a27242b516da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:43 crc kubenswrapper[5072]: I1124 11:26:43.748240 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b87dd9d8-b704-4a8b-9037-a27242b516da-kube-api-access-9zp7d" (OuterVolumeSpecName: "kube-api-access-9zp7d") pod "b87dd9d8-b704-4a8b-9037-a27242b516da" (UID: "b87dd9d8-b704-4a8b-9037-a27242b516da"). InnerVolumeSpecName "kube-api-access-9zp7d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:26:43 crc kubenswrapper[5072]: I1124 11:26:43.748801 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b87dd9d8-b704-4a8b-9037-a27242b516da-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b87dd9d8-b704-4a8b-9037-a27242b516da" (UID: "b87dd9d8-b704-4a8b-9037-a27242b516da"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:43 crc kubenswrapper[5072]: I1124 11:26:43.816733 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b87dd9d8-b704-4a8b-9037-a27242b516da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b87dd9d8-b704-4a8b-9037-a27242b516da" (UID: "b87dd9d8-b704-4a8b-9037-a27242b516da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:43 crc kubenswrapper[5072]: I1124 11:26:43.842907 5072 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b87dd9d8-b704-4a8b-9037-a27242b516da-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:43 crc kubenswrapper[5072]: I1124 11:26:43.842943 5072 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b87dd9d8-b704-4a8b-9037-a27242b516da-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:43 crc kubenswrapper[5072]: I1124 11:26:43.842955 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b87dd9d8-b704-4a8b-9037-a27242b516da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:43 crc kubenswrapper[5072]: I1124 11:26:43.842968 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zp7d\" (UniqueName: \"kubernetes.io/projected/b87dd9d8-b704-4a8b-9037-a27242b516da-kube-api-access-9zp7d\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:43 crc kubenswrapper[5072]: I1124 11:26:43.842995 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b87dd9d8-b704-4a8b-9037-a27242b516da-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:43 crc kubenswrapper[5072]: I1124 11:26:43.862565 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b87dd9d8-b704-4a8b-9037-a27242b516da-config-data" (OuterVolumeSpecName: "config-data") pod "b87dd9d8-b704-4a8b-9037-a27242b516da" (UID: "b87dd9d8-b704-4a8b-9037-a27242b516da"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:43 crc kubenswrapper[5072]: I1124 11:26:43.945152 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b87dd9d8-b704-4a8b-9037-a27242b516da-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:43 crc kubenswrapper[5072]: I1124 11:26:43.954948 5072 generic.go:334] "Generic (PLEG): container finished" podID="ea6b17ec-1925-4441-965e-9f2eeca16bec" containerID="520695adde43cd501b9afc9befe9d308cef3532d7c842639fa0993497d308b4e" exitCode=0 Nov 24 11:26:43 crc kubenswrapper[5072]: I1124 11:26:43.954993 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6765f59d56-zj7gz" event={"ID":"ea6b17ec-1925-4441-965e-9f2eeca16bec","Type":"ContainerDied","Data":"520695adde43cd501b9afc9befe9d308cef3532d7c842639fa0993497d308b4e"} Nov 24 11:26:43 crc kubenswrapper[5072]: I1124 11:26:43.957802 5072 generic.go:334] "Generic (PLEG): container finished" podID="b87dd9d8-b704-4a8b-9037-a27242b516da" containerID="f519d9c0fdc385aafd8af74fd44984e171a548b9c20c6e69580dfbb4e840ca9a" exitCode=0 Nov 24 11:26:43 crc kubenswrapper[5072]: I1124 11:26:43.957852 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b87dd9d8-b704-4a8b-9037-a27242b516da","Type":"ContainerDied","Data":"f519d9c0fdc385aafd8af74fd44984e171a548b9c20c6e69580dfbb4e840ca9a"} Nov 24 11:26:43 crc kubenswrapper[5072]: I1124 11:26:43.957877 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 11:26:43 crc kubenswrapper[5072]: I1124 11:26:43.957897 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b87dd9d8-b704-4a8b-9037-a27242b516da","Type":"ContainerDied","Data":"90556f8fadd0f4cb64afd0a5a4c5cb0a4fe22948a727cafe6ee2ec62652c1dd0"} Nov 24 11:26:43 crc kubenswrapper[5072]: I1124 11:26:43.957947 5072 scope.go:117] "RemoveContainer" containerID="d1df5091ed6b678c0194ade0e72451400f2eaa4117cf8e64600280e1d5d101af" Nov 24 11:26:43 crc kubenswrapper[5072]: I1124 11:26:43.976723 5072 scope.go:117] "RemoveContainer" containerID="f519d9c0fdc385aafd8af74fd44984e171a548b9c20c6e69580dfbb4e840ca9a" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.010565 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.027701 5072 scope.go:117] "RemoveContainer" containerID="d1df5091ed6b678c0194ade0e72451400f2eaa4117cf8e64600280e1d5d101af" Nov 24 11:26:44 crc kubenswrapper[5072]: E1124 11:26:44.029217 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1df5091ed6b678c0194ade0e72451400f2eaa4117cf8e64600280e1d5d101af\": container with ID starting with d1df5091ed6b678c0194ade0e72451400f2eaa4117cf8e64600280e1d5d101af not found: ID does not exist" containerID="d1df5091ed6b678c0194ade0e72451400f2eaa4117cf8e64600280e1d5d101af" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.029299 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1df5091ed6b678c0194ade0e72451400f2eaa4117cf8e64600280e1d5d101af"} err="failed to get container status \"d1df5091ed6b678c0194ade0e72451400f2eaa4117cf8e64600280e1d5d101af\": rpc error: code = NotFound desc = could not find container \"d1df5091ed6b678c0194ade0e72451400f2eaa4117cf8e64600280e1d5d101af\": container with 
ID starting with d1df5091ed6b678c0194ade0e72451400f2eaa4117cf8e64600280e1d5d101af not found: ID does not exist" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.029333 5072 scope.go:117] "RemoveContainer" containerID="f519d9c0fdc385aafd8af74fd44984e171a548b9c20c6e69580dfbb4e840ca9a" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.029502 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 11:26:44 crc kubenswrapper[5072]: E1124 11:26:44.029853 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f519d9c0fdc385aafd8af74fd44984e171a548b9c20c6e69580dfbb4e840ca9a\": container with ID starting with f519d9c0fdc385aafd8af74fd44984e171a548b9c20c6e69580dfbb4e840ca9a not found: ID does not exist" containerID="f519d9c0fdc385aafd8af74fd44984e171a548b9c20c6e69580dfbb4e840ca9a" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.029875 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f519d9c0fdc385aafd8af74fd44984e171a548b9c20c6e69580dfbb4e840ca9a"} err="failed to get container status \"f519d9c0fdc385aafd8af74fd44984e171a548b9c20c6e69580dfbb4e840ca9a\": rpc error: code = NotFound desc = could not find container \"f519d9c0fdc385aafd8af74fd44984e171a548b9c20c6e69580dfbb4e840ca9a\": container with ID starting with f519d9c0fdc385aafd8af74fd44984e171a548b9c20c6e69580dfbb4e840ca9a not found: ID does not exist" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.042473 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 11:26:44 crc kubenswrapper[5072]: E1124 11:26:44.043552 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e3a041-841d-423f-80a2-69a532d7975e" containerName="barbican-api" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.043573 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e3a041-841d-423f-80a2-69a532d7975e" containerName="barbican-api" Nov 24 11:26:44 crc kubenswrapper[5072]: E1124 11:26:44.043585 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b87dd9d8-b704-4a8b-9037-a27242b516da" containerName="cinder-scheduler" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.043593 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="b87dd9d8-b704-4a8b-9037-a27242b516da" containerName="cinder-scheduler" Nov 24 11:26:44 crc kubenswrapper[5072]: E1124 11:26:44.043615 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b87dd9d8-b704-4a8b-9037-a27242b516da" containerName="probe" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.043621 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="b87dd9d8-b704-4a8b-9037-a27242b516da" containerName="probe" Nov 24 11:26:44 crc kubenswrapper[5072]: E1124 11:26:44.043632 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0569a2f4-e2fb-4625-a547-a9244109a287" containerName="init" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.043641 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="0569a2f4-e2fb-4625-a547-a9244109a287" containerName="init" Nov 24 11:26:44 crc kubenswrapper[5072]: E1124 11:26:44.043655 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e3a041-841d-423f-80a2-69a532d7975e" containerName="barbican-api-log" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.043661 5072 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e2e3a041-841d-423f-80a2-69a532d7975e" containerName="barbican-api-log" Nov 24 11:26:44 crc kubenswrapper[5072]: E1124 11:26:44.043671 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0569a2f4-e2fb-4625-a547-a9244109a287" containerName="dnsmasq-dns" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.043676 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="0569a2f4-e2fb-4625-a547-a9244109a287" containerName="dnsmasq-dns" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.043827 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="0569a2f4-e2fb-4625-a547-a9244109a287" containerName="dnsmasq-dns" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.043841 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="b87dd9d8-b704-4a8b-9037-a27242b516da" containerName="probe" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.043854 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2e3a041-841d-423f-80a2-69a532d7975e" containerName="barbican-api" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.043869 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2e3a041-841d-423f-80a2-69a532d7975e" containerName="barbican-api-log" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.043879 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="b87dd9d8-b704-4a8b-9037-a27242b516da" containerName="cinder-scheduler" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.047191 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.048032 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.052496 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.148653 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5053f25d-e6d3-4a92-88f4-5659485403af-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5053f25d-e6d3-4a92-88f4-5659485403af\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.148724 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5053f25d-e6d3-4a92-88f4-5659485403af-config-data\") pod \"cinder-scheduler-0\" (UID: \"5053f25d-e6d3-4a92-88f4-5659485403af\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.148773 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5053f25d-e6d3-4a92-88f4-5659485403af-scripts\") pod \"cinder-scheduler-0\" (UID: \"5053f25d-e6d3-4a92-88f4-5659485403af\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.148837 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5053f25d-e6d3-4a92-88f4-5659485403af-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5053f25d-e6d3-4a92-88f4-5659485403af\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 
11:26:44.148878 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5053f25d-e6d3-4a92-88f4-5659485403af-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5053f25d-e6d3-4a92-88f4-5659485403af\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.148987 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4dg4\" (UniqueName: \"kubernetes.io/projected/5053f25d-e6d3-4a92-88f4-5659485403af-kube-api-access-z4dg4\") pod \"cinder-scheduler-0\" (UID: \"5053f25d-e6d3-4a92-88f4-5659485403af\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.161806 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6765f59d56-zj7gz" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.249748 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea6b17ec-1925-4441-965e-9f2eeca16bec-ovndb-tls-certs\") pod \"ea6b17ec-1925-4441-965e-9f2eeca16bec\" (UID: \"ea6b17ec-1925-4441-965e-9f2eeca16bec\") " Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.249852 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ea6b17ec-1925-4441-965e-9f2eeca16bec-config\") pod \"ea6b17ec-1925-4441-965e-9f2eeca16bec\" (UID: \"ea6b17ec-1925-4441-965e-9f2eeca16bec\") " Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.250530 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea6b17ec-1925-4441-965e-9f2eeca16bec-combined-ca-bundle\") pod \"ea6b17ec-1925-4441-965e-9f2eeca16bec\" (UID: \"ea6b17ec-1925-4441-965e-9f2eeca16bec\") " Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.250692 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ea6b17ec-1925-4441-965e-9f2eeca16bec-httpd-config\") pod \"ea6b17ec-1925-4441-965e-9f2eeca16bec\" (UID: \"ea6b17ec-1925-4441-965e-9f2eeca16bec\") " Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.250789 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m66fz\" (UniqueName: \"kubernetes.io/projected/ea6b17ec-1925-4441-965e-9f2eeca16bec-kube-api-access-m66fz\") pod \"ea6b17ec-1925-4441-965e-9f2eeca16bec\" (UID: \"ea6b17ec-1925-4441-965e-9f2eeca16bec\") " Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.251022 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4dg4\" (UniqueName: \"kubernetes.io/projected/5053f25d-e6d3-4a92-88f4-5659485403af-kube-api-access-z4dg4\") pod \"cinder-scheduler-0\" (UID: \"5053f25d-e6d3-4a92-88f4-5659485403af\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.251117 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5053f25d-e6d3-4a92-88f4-5659485403af-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5053f25d-e6d3-4a92-88f4-5659485403af\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.251477 5072 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5053f25d-e6d3-4a92-88f4-5659485403af-config-data\") pod \"cinder-scheduler-0\" (UID: \"5053f25d-e6d3-4a92-88f4-5659485403af\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.251559 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5053f25d-e6d3-4a92-88f4-5659485403af-scripts\") pod \"cinder-scheduler-0\" (UID: \"5053f25d-e6d3-4a92-88f4-5659485403af\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.251626 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5053f25d-e6d3-4a92-88f4-5659485403af-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5053f25d-e6d3-4a92-88f4-5659485403af\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.251709 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5053f25d-e6d3-4a92-88f4-5659485403af-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5053f25d-e6d3-4a92-88f4-5659485403af\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.251820 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5053f25d-e6d3-4a92-88f4-5659485403af-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5053f25d-e6d3-4a92-88f4-5659485403af\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.254740 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea6b17ec-1925-4441-965e-9f2eeca16bec-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "ea6b17ec-1925-4441-965e-9f2eeca16bec" (UID: "ea6b17ec-1925-4441-965e-9f2eeca16bec"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.255216 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea6b17ec-1925-4441-965e-9f2eeca16bec-kube-api-access-m66fz" (OuterVolumeSpecName: "kube-api-access-m66fz") pod "ea6b17ec-1925-4441-965e-9f2eeca16bec" (UID: "ea6b17ec-1925-4441-965e-9f2eeca16bec"). InnerVolumeSpecName "kube-api-access-m66fz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.255676 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5053f25d-e6d3-4a92-88f4-5659485403af-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5053f25d-e6d3-4a92-88f4-5659485403af\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.257245 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5053f25d-e6d3-4a92-88f4-5659485403af-config-data\") pod \"cinder-scheduler-0\" (UID: \"5053f25d-e6d3-4a92-88f4-5659485403af\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.258175 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5053f25d-e6d3-4a92-88f4-5659485403af-scripts\") pod \"cinder-scheduler-0\" (UID: \"5053f25d-e6d3-4a92-88f4-5659485403af\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.266108 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5053f25d-e6d3-4a92-88f4-5659485403af-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5053f25d-e6d3-4a92-88f4-5659485403af\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.270989 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4dg4\" (UniqueName: \"kubernetes.io/projected/5053f25d-e6d3-4a92-88f4-5659485403af-kube-api-access-z4dg4\") pod \"cinder-scheduler-0\" (UID: \"5053f25d-e6d3-4a92-88f4-5659485403af\") " pod="openstack/cinder-scheduler-0" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.310822 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea6b17ec-1925-4441-965e-9f2eeca16bec-config" (OuterVolumeSpecName: "config") pod "ea6b17ec-1925-4441-965e-9f2eeca16bec" (UID: "ea6b17ec-1925-4441-965e-9f2eeca16bec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.314167 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea6b17ec-1925-4441-965e-9f2eeca16bec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea6b17ec-1925-4441-965e-9f2eeca16bec" (UID: "ea6b17ec-1925-4441-965e-9f2eeca16bec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.349461 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea6b17ec-1925-4441-965e-9f2eeca16bec-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "ea6b17ec-1925-4441-965e-9f2eeca16bec" (UID: "ea6b17ec-1925-4441-965e-9f2eeca16bec"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.354840 5072 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ea6b17ec-1925-4441-965e-9f2eeca16bec-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.354888 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m66fz\" (UniqueName: \"kubernetes.io/projected/ea6b17ec-1925-4441-965e-9f2eeca16bec-kube-api-access-m66fz\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.354901 5072 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea6b17ec-1925-4441-965e-9f2eeca16bec-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.354915 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ea6b17ec-1925-4441-965e-9f2eeca16bec-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.354928 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea6b17ec-1925-4441-965e-9f2eeca16bec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.380916 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.735045 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6cc7b79dbf-mkd8x" Nov 24 11:26:44 crc kubenswrapper[5072]: I1124 11:26:44.905785 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 11:26:45 crc kubenswrapper[5072]: I1124 11:26:45.001995 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5053f25d-e6d3-4a92-88f4-5659485403af","Type":"ContainerStarted","Data":"ab0ebeebf2d0f4522ef2b3a6d0f57749fd5d359521514c328c6fd57dcb22f9c7"} Nov 24 11:26:45 crc kubenswrapper[5072]: I1124 11:26:45.007794 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6765f59d56-zj7gz" event={"ID":"ea6b17ec-1925-4441-965e-9f2eeca16bec","Type":"ContainerDied","Data":"75e77858822e47f2caedc6238227e146f0d48c793a75683695151e48c31da8fa"} Nov 24 11:26:45 crc kubenswrapper[5072]: I1124 11:26:45.007827 5072 scope.go:117] "RemoveContainer" containerID="fa3af4260987b08192d8788da8a5f087c0f3f8e5cbd5e787586354887bec78fe" Nov 24 11:26:45 crc kubenswrapper[5072]: I1124 11:26:45.007937 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6765f59d56-zj7gz" Nov 24 11:26:45 crc kubenswrapper[5072]: I1124 11:26:45.074474 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b87dd9d8-b704-4a8b-9037-a27242b516da" path="/var/lib/kubelet/pods/b87dd9d8-b704-4a8b-9037-a27242b516da/volumes" Nov 24 11:26:45 crc kubenswrapper[5072]: I1124 11:26:45.100685 5072 scope.go:117] "RemoveContainer" containerID="520695adde43cd501b9afc9befe9d308cef3532d7c842639fa0993497d308b4e" Nov 24 11:26:45 crc kubenswrapper[5072]: I1124 11:26:45.143174 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6765f59d56-zj7gz"] Nov 24 11:26:45 crc kubenswrapper[5072]: I1124 11:26:45.150958 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6765f59d56-zj7gz"] Nov 24 11:26:45 crc kubenswrapper[5072]: I1124 11:26:45.324493 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 24 11:26:46 crc kubenswrapper[5072]: I1124 11:26:46.035734 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5053f25d-e6d3-4a92-88f4-5659485403af","Type":"ContainerStarted","Data":"c9080d851f90d2430d1921a51490b3c54ab9ce4127b0e0672019620734ad4364"} Nov 24 11:26:47 crc kubenswrapper[5072]: I1124 11:26:47.035981 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea6b17ec-1925-4441-965e-9f2eeca16bec" path="/var/lib/kubelet/pods/ea6b17ec-1925-4441-965e-9f2eeca16bec/volumes" Nov 24 11:26:47 crc kubenswrapper[5072]: I1124 11:26:47.044599 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5053f25d-e6d3-4a92-88f4-5659485403af","Type":"ContainerStarted","Data":"b81cc764ae8e484d518887cac67806d8111f75c8c5ce4037d1d5ff347d43a18b"} Nov 24 11:26:47 crc kubenswrapper[5072]: I1124 11:26:47.068657 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.068641739 podStartE2EDuration="4.068641739s" podCreationTimestamp="2025-11-24 11:26:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:26:47.06593504 +0000 UTC m=+1058.777459506" watchObservedRunningTime="2025-11-24 11:26:47.068641739 +0000 UTC m=+1058.780166205" Nov 24 11:26:48 crc kubenswrapper[5072]: I1124 11:26:48.347736 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 24 11:26:48 crc kubenswrapper[5072]: E1124 11:26:48.348211 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea6b17ec-1925-4441-965e-9f2eeca16bec" containerName="neutron-api" Nov 24 11:26:48 crc kubenswrapper[5072]: I1124 11:26:48.348227 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea6b17ec-1925-4441-965e-9f2eeca16bec" containerName="neutron-api" Nov 24 11:26:48 crc kubenswrapper[5072]: E1124 11:26:48.348251 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea6b17ec-1925-4441-965e-9f2eeca16bec" containerName="neutron-httpd" Nov 24 11:26:48 crc kubenswrapper[5072]: I1124 11:26:48.348260 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea6b17ec-1925-4441-965e-9f2eeca16bec" containerName="neutron-httpd" Nov 24 11:26:48 crc kubenswrapper[5072]: I1124 11:26:48.348487 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea6b17ec-1925-4441-965e-9f2eeca16bec" containerName="neutron-api" Nov 24 11:26:48 crc 
kubenswrapper[5072]: I1124 11:26:48.348505 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea6b17ec-1925-4441-965e-9f2eeca16bec" containerName="neutron-httpd" Nov 24 11:26:48 crc kubenswrapper[5072]: I1124 11:26:48.349240 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 24 11:26:48 crc kubenswrapper[5072]: I1124 11:26:48.354091 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 11:26:48 crc kubenswrapper[5072]: I1124 11:26:48.354891 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 24 11:26:48 crc kubenswrapper[5072]: I1124 11:26:48.355087 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-grwxw" Nov 24 11:26:48 crc kubenswrapper[5072]: I1124 11:26:48.355473 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 24 11:26:48 crc kubenswrapper[5072]: I1124 11:26:48.448495 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/36162589-ddbd-4386-82e5-62d4d73d41b7-openstack-config\") pod \"openstackclient\" (UID: \"36162589-ddbd-4386-82e5-62d4d73d41b7\") " pod="openstack/openstackclient" Nov 24 11:26:48 crc kubenswrapper[5072]: I1124 11:26:48.448792 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36162589-ddbd-4386-82e5-62d4d73d41b7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"36162589-ddbd-4386-82e5-62d4d73d41b7\") " pod="openstack/openstackclient" Nov 24 11:26:48 crc kubenswrapper[5072]: I1124 11:26:48.448818 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwk2l\" (UniqueName: \"kubernetes.io/projected/36162589-ddbd-4386-82e5-62d4d73d41b7-kube-api-access-bwk2l\") pod \"openstackclient\" (UID: \"36162589-ddbd-4386-82e5-62d4d73d41b7\") " pod="openstack/openstackclient" Nov 24 11:26:48 crc kubenswrapper[5072]: I1124 11:26:48.448861 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/36162589-ddbd-4386-82e5-62d4d73d41b7-openstack-config-secret\") pod \"openstackclient\" (UID: \"36162589-ddbd-4386-82e5-62d4d73d41b7\") " pod="openstack/openstackclient" Nov 24 11:26:48 crc kubenswrapper[5072]: I1124 11:26:48.550429 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwk2l\" (UniqueName: \"kubernetes.io/projected/36162589-ddbd-4386-82e5-62d4d73d41b7-kube-api-access-bwk2l\") pod \"openstackclient\" (UID: \"36162589-ddbd-4386-82e5-62d4d73d41b7\") " pod="openstack/openstackclient" Nov 24 11:26:48 crc kubenswrapper[5072]: I1124 11:26:48.550594 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/36162589-ddbd-4386-82e5-62d4d73d41b7-openstack-config-secret\") pod \"openstackclient\" (UID: \"36162589-ddbd-4386-82e5-62d4d73d41b7\") " pod="openstack/openstackclient" Nov 24 11:26:48 crc kubenswrapper[5072]: I1124 11:26:48.552168 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/36162589-ddbd-4386-82e5-62d4d73d41b7-openstack-config\") pod \"openstackclient\" (UID: \"36162589-ddbd-4386-82e5-62d4d73d41b7\") " pod="openstack/openstackclient" Nov 24 11:26:48 crc kubenswrapper[5072]: I1124 11:26:48.552238 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36162589-ddbd-4386-82e5-62d4d73d41b7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"36162589-ddbd-4386-82e5-62d4d73d41b7\") " pod="openstack/openstackclient" Nov 24 11:26:48 crc kubenswrapper[5072]: I1124 11:26:48.553578 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/36162589-ddbd-4386-82e5-62d4d73d41b7-openstack-config\") pod \"openstackclient\" (UID: \"36162589-ddbd-4386-82e5-62d4d73d41b7\") " pod="openstack/openstackclient" Nov 24 11:26:48 crc kubenswrapper[5072]: I1124 11:26:48.558110 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/36162589-ddbd-4386-82e5-62d4d73d41b7-openstack-config-secret\") pod \"openstackclient\" (UID: \"36162589-ddbd-4386-82e5-62d4d73d41b7\") " pod="openstack/openstackclient" Nov 24 11:26:48 crc kubenswrapper[5072]: I1124 11:26:48.567861 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36162589-ddbd-4386-82e5-62d4d73d41b7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"36162589-ddbd-4386-82e5-62d4d73d41b7\") " pod="openstack/openstackclient" Nov 24 11:26:48 crc kubenswrapper[5072]: I1124 11:26:48.568326 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwk2l\" (UniqueName: \"kubernetes.io/projected/36162589-ddbd-4386-82e5-62d4d73d41b7-kube-api-access-bwk2l\") pod \"openstackclient\" (UID: \"36162589-ddbd-4386-82e5-62d4d73d41b7\") " pod="openstack/openstackclient" Nov 24 11:26:48 crc kubenswrapper[5072]: I1124 11:26:48.675245 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 24 11:26:49 crc kubenswrapper[5072]: I1124 11:26:49.126371 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 11:26:49 crc kubenswrapper[5072]: I1124 11:26:49.381449 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 24 11:26:50 crc kubenswrapper[5072]: I1124 11:26:50.071403 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"36162589-ddbd-4386-82e5-62d4d73d41b7","Type":"ContainerStarted","Data":"5ff5b579d091f8a618d3dcfa23b9eb3c4e9565e33d5bc0651713fed3667aee36"} Nov 24 11:26:50 crc kubenswrapper[5072]: I1124 11:26:50.987639 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:26:50 crc kubenswrapper[5072]: I1124 11:26:50.988238 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b596a610-936b-465e-aa9d-cb3b8f7811a4" containerName="ceilometer-central-agent" containerID="cri-o://0dea738dbd0d20ab607009a71fc10cafb721363f18aae1e2bccbd2b2f516fc90" gracePeriod=30 Nov 24 11:26:50 crc kubenswrapper[5072]: I1124 11:26:50.988349 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b596a610-936b-465e-aa9d-cb3b8f7811a4" containerName="proxy-httpd" containerID="cri-o://d58ac5848c669e06620802778cb91f5a9261b93ea91426bd7da12b6e1c704a06" gracePeriod=30 Nov 24 11:26:50 crc kubenswrapper[5072]: I1124 11:26:50.988363 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b596a610-936b-465e-aa9d-cb3b8f7811a4" containerName="ceilometer-notification-agent" containerID="cri-o://000fde3ba0f07a2e05d9e3c475c3113c4786af8bf4e719407ca1f4881edfff42" gracePeriod=30 Nov 24 11:26:50 crc kubenswrapper[5072]: I1124 11:26:50.988336 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b596a610-936b-465e-aa9d-cb3b8f7811a4" containerName="sg-core" containerID="cri-o://41e23318d797772b3402c14be3112eeff9df54547c6f7c9ab1098c4abcfe8773" gracePeriod=30 Nov 24 11:26:50 crc kubenswrapper[5072]: I1124 11:26:50.993032 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 11:26:52 crc kubenswrapper[5072]: I1124 11:26:52.093284 5072 generic.go:334] "Generic (PLEG): container finished" podID="b596a610-936b-465e-aa9d-cb3b8f7811a4" containerID="d58ac5848c669e06620802778cb91f5a9261b93ea91426bd7da12b6e1c704a06" exitCode=0 Nov 24 11:26:52 crc kubenswrapper[5072]: I1124 11:26:52.093592 5072 generic.go:334] "Generic (PLEG): container finished" podID="b596a610-936b-465e-aa9d-cb3b8f7811a4" containerID="41e23318d797772b3402c14be3112eeff9df54547c6f7c9ab1098c4abcfe8773" exitCode=2 Nov 24 11:26:52 crc kubenswrapper[5072]: I1124 11:26:52.093603 5072 generic.go:334] "Generic (PLEG): container finished" podID="b596a610-936b-465e-aa9d-cb3b8f7811a4" containerID="0dea738dbd0d20ab607009a71fc10cafb721363f18aae1e2bccbd2b2f516fc90" exitCode=0 Nov 24 11:26:52 crc kubenswrapper[5072]: I1124 11:26:52.093493 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b596a610-936b-465e-aa9d-cb3b8f7811a4","Type":"ContainerDied","Data":"d58ac5848c669e06620802778cb91f5a9261b93ea91426bd7da12b6e1c704a06"} Nov 24 11:26:52 crc kubenswrapper[5072]: I1124 11:26:52.093633 5072 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b596a610-936b-465e-aa9d-cb3b8f7811a4","Type":"ContainerDied","Data":"41e23318d797772b3402c14be3112eeff9df54547c6f7c9ab1098c4abcfe8773"} Nov 24 11:26:52 crc kubenswrapper[5072]: I1124 11:26:52.093645 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b596a610-936b-465e-aa9d-cb3b8f7811a4","Type":"ContainerDied","Data":"0dea738dbd0d20ab607009a71fc10cafb721363f18aae1e2bccbd2b2f516fc90"} Nov 24 11:26:52 crc kubenswrapper[5072]: I1124 11:26:52.684911 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:26:52 crc kubenswrapper[5072]: I1124 11:26:52.722838 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b596a610-936b-465e-aa9d-cb3b8f7811a4-sg-core-conf-yaml\") pod \"b596a610-936b-465e-aa9d-cb3b8f7811a4\" (UID: \"b596a610-936b-465e-aa9d-cb3b8f7811a4\") " Nov 24 11:26:52 crc kubenswrapper[5072]: I1124 11:26:52.722912 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b596a610-936b-465e-aa9d-cb3b8f7811a4-scripts\") pod \"b596a610-936b-465e-aa9d-cb3b8f7811a4\" (UID: \"b596a610-936b-465e-aa9d-cb3b8f7811a4\") " Nov 24 11:26:52 crc kubenswrapper[5072]: I1124 11:26:52.722935 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8pz6\" (UniqueName: \"kubernetes.io/projected/b596a610-936b-465e-aa9d-cb3b8f7811a4-kube-api-access-n8pz6\") pod \"b596a610-936b-465e-aa9d-cb3b8f7811a4\" (UID: \"b596a610-936b-465e-aa9d-cb3b8f7811a4\") " Nov 24 11:26:52 crc kubenswrapper[5072]: I1124 11:26:52.722985 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b596a610-936b-465e-aa9d-cb3b8f7811a4-combined-ca-bundle\") pod \"b596a610-936b-465e-aa9d-cb3b8f7811a4\" (UID: \"b596a610-936b-465e-aa9d-cb3b8f7811a4\") " Nov 24 11:26:52 crc kubenswrapper[5072]: I1124 11:26:52.723033 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b596a610-936b-465e-aa9d-cb3b8f7811a4-run-httpd\") pod \"b596a610-936b-465e-aa9d-cb3b8f7811a4\" (UID: \"b596a610-936b-465e-aa9d-cb3b8f7811a4\") " Nov 24 11:26:52 crc kubenswrapper[5072]: I1124 11:26:52.723093 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b596a610-936b-465e-aa9d-cb3b8f7811a4-log-httpd\") pod \"b596a610-936b-465e-aa9d-cb3b8f7811a4\" (UID: \"b596a610-936b-465e-aa9d-cb3b8f7811a4\") " Nov 24 11:26:52 crc kubenswrapper[5072]: I1124 11:26:52.723116 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b596a610-936b-465e-aa9d-cb3b8f7811a4-config-data\") pod \"b596a610-936b-465e-aa9d-cb3b8f7811a4\" (UID: \"b596a610-936b-465e-aa9d-cb3b8f7811a4\") " Nov 24 11:26:52 crc kubenswrapper[5072]: I1124 11:26:52.724464 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b596a610-936b-465e-aa9d-cb3b8f7811a4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b596a610-936b-465e-aa9d-cb3b8f7811a4" (UID: "b596a610-936b-465e-aa9d-cb3b8f7811a4"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:26:52 crc kubenswrapper[5072]: I1124 11:26:52.727998 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b596a610-936b-465e-aa9d-cb3b8f7811a4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b596a610-936b-465e-aa9d-cb3b8f7811a4" (UID: "b596a610-936b-465e-aa9d-cb3b8f7811a4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:26:52 crc kubenswrapper[5072]: I1124 11:26:52.755995 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b596a610-936b-465e-aa9d-cb3b8f7811a4-scripts" (OuterVolumeSpecName: "scripts") pod "b596a610-936b-465e-aa9d-cb3b8f7811a4" (UID: "b596a610-936b-465e-aa9d-cb3b8f7811a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:52 crc kubenswrapper[5072]: I1124 11:26:52.756103 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b596a610-936b-465e-aa9d-cb3b8f7811a4-kube-api-access-n8pz6" (OuterVolumeSpecName: "kube-api-access-n8pz6") pod "b596a610-936b-465e-aa9d-cb3b8f7811a4" (UID: "b596a610-936b-465e-aa9d-cb3b8f7811a4"). InnerVolumeSpecName "kube-api-access-n8pz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:26:52 crc kubenswrapper[5072]: I1124 11:26:52.761988 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b596a610-936b-465e-aa9d-cb3b8f7811a4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b596a610-936b-465e-aa9d-cb3b8f7811a4" (UID: "b596a610-936b-465e-aa9d-cb3b8f7811a4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:52 crc kubenswrapper[5072]: I1124 11:26:52.825973 5072 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b596a610-936b-465e-aa9d-cb3b8f7811a4-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:52 crc kubenswrapper[5072]: I1124 11:26:52.826024 5072 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b596a610-936b-465e-aa9d-cb3b8f7811a4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:52 crc kubenswrapper[5072]: I1124 11:26:52.826051 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b596a610-936b-465e-aa9d-cb3b8f7811a4-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:52 crc kubenswrapper[5072]: I1124 11:26:52.826060 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8pz6\" (UniqueName: \"kubernetes.io/projected/b596a610-936b-465e-aa9d-cb3b8f7811a4-kube-api-access-n8pz6\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:52 crc kubenswrapper[5072]: I1124 11:26:52.826068 5072 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b596a610-936b-465e-aa9d-cb3b8f7811a4-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:52 crc kubenswrapper[5072]: I1124 11:26:52.829190 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b596a610-936b-465e-aa9d-cb3b8f7811a4-config-data" (OuterVolumeSpecName: "config-data") pod "b596a610-936b-465e-aa9d-cb3b8f7811a4" (UID: "b596a610-936b-465e-aa9d-cb3b8f7811a4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:52 crc kubenswrapper[5072]: I1124 11:26:52.835790 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b596a610-936b-465e-aa9d-cb3b8f7811a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b596a610-936b-465e-aa9d-cb3b8f7811a4" (UID: "b596a610-936b-465e-aa9d-cb3b8f7811a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:52 crc kubenswrapper[5072]: I1124 11:26:52.927501 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b596a610-936b-465e-aa9d-cb3b8f7811a4-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:52 crc kubenswrapper[5072]: I1124 11:26:52.927825 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b596a610-936b-465e-aa9d-cb3b8f7811a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.105160 5072 generic.go:334] "Generic (PLEG): container finished" podID="b596a610-936b-465e-aa9d-cb3b8f7811a4" containerID="000fde3ba0f07a2e05d9e3c475c3113c4786af8bf4e719407ca1f4881edfff42" exitCode=0 Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.105211 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b596a610-936b-465e-aa9d-cb3b8f7811a4","Type":"ContainerDied","Data":"000fde3ba0f07a2e05d9e3c475c3113c4786af8bf4e719407ca1f4881edfff42"} Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.105241 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b596a610-936b-465e-aa9d-cb3b8f7811a4","Type":"ContainerDied","Data":"ff55489101bfec25266b04b65979d1f0dbf879163397987b45ca098ceeb83a17"} Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.105262 5072 scope.go:117] "RemoveContainer" containerID="d58ac5848c669e06620802778cb91f5a9261b93ea91426bd7da12b6e1c704a06" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.106332 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.133028 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.154072 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.164361 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:26:53 crc kubenswrapper[5072]: E1124 11:26:53.165012 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b596a610-936b-465e-aa9d-cb3b8f7811a4" containerName="ceilometer-notification-agent" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.165036 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="b596a610-936b-465e-aa9d-cb3b8f7811a4" containerName="ceilometer-notification-agent" Nov 24 11:26:53 crc kubenswrapper[5072]: E1124 11:26:53.165070 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b596a610-936b-465e-aa9d-cb3b8f7811a4" containerName="ceilometer-central-agent" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.165077 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="b596a610-936b-465e-aa9d-cb3b8f7811a4" containerName="ceilometer-central-agent" Nov 24 11:26:53 crc kubenswrapper[5072]: E1124 11:26:53.165092 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b596a610-936b-465e-aa9d-cb3b8f7811a4" containerName="proxy-httpd" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.165100 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="b596a610-936b-465e-aa9d-cb3b8f7811a4" containerName="proxy-httpd" Nov 24 11:26:53 crc kubenswrapper[5072]: E1124 11:26:53.165111 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b596a610-936b-465e-aa9d-cb3b8f7811a4" containerName="sg-core" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.165118 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="b596a610-936b-465e-aa9d-cb3b8f7811a4" containerName="sg-core" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.165333 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="b596a610-936b-465e-aa9d-cb3b8f7811a4" containerName="ceilometer-central-agent" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.165353 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="b596a610-936b-465e-aa9d-cb3b8f7811a4" containerName="sg-core" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.165361 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="b596a610-936b-465e-aa9d-cb3b8f7811a4" containerName="proxy-httpd" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.165398 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="b596a610-936b-465e-aa9d-cb3b8f7811a4" containerName="ceilometer-notification-agent" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.166938 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.171936 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.172860 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.173562 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.232240 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b59dad27-fffc-4e50-a269-262c2b77f88b-config-data\") pod \"ceilometer-0\" (UID: \"b59dad27-fffc-4e50-a269-262c2b77f88b\") " pod="openstack/ceilometer-0" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.232593 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b59dad27-fffc-4e50-a269-262c2b77f88b-scripts\") pod \"ceilometer-0\" (UID: \"b59dad27-fffc-4e50-a269-262c2b77f88b\") " pod="openstack/ceilometer-0" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.232619 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b59dad27-fffc-4e50-a269-262c2b77f88b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b59dad27-fffc-4e50-a269-262c2b77f88b\") " pod="openstack/ceilometer-0" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.232652 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b59dad27-fffc-4e50-a269-262c2b77f88b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b59dad27-fffc-4e50-a269-262c2b77f88b\") " pod="openstack/ceilometer-0" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.232685 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c26vn\" (UniqueName: \"kubernetes.io/projected/b59dad27-fffc-4e50-a269-262c2b77f88b-kube-api-access-c26vn\") pod \"ceilometer-0\" (UID: \"b59dad27-fffc-4e50-a269-262c2b77f88b\") " pod="openstack/ceilometer-0" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.232811 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b59dad27-fffc-4e50-a269-262c2b77f88b-log-httpd\") pod \"ceilometer-0\" (UID: \"b59dad27-fffc-4e50-a269-262c2b77f88b\") " pod="openstack/ceilometer-0" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.232934 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b59dad27-fffc-4e50-a269-262c2b77f88b-run-httpd\") pod \"ceilometer-0\" (UID: \"b59dad27-fffc-4e50-a269-262c2b77f88b\") " pod="openstack/ceilometer-0" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.335262 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b59dad27-fffc-4e50-a269-262c2b77f88b-scripts\") pod \"ceilometer-0\" (UID: \"b59dad27-fffc-4e50-a269-262c2b77f88b\") " pod="openstack/ceilometer-0" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.335317 5072 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b59dad27-fffc-4e50-a269-262c2b77f88b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b59dad27-fffc-4e50-a269-262c2b77f88b\") " pod="openstack/ceilometer-0" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.335359 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b59dad27-fffc-4e50-a269-262c2b77f88b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b59dad27-fffc-4e50-a269-262c2b77f88b\") " pod="openstack/ceilometer-0" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.335458 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c26vn\" (UniqueName: \"kubernetes.io/projected/b59dad27-fffc-4e50-a269-262c2b77f88b-kube-api-access-c26vn\") pod \"ceilometer-0\" (UID: \"b59dad27-fffc-4e50-a269-262c2b77f88b\") " pod="openstack/ceilometer-0" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.335486 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b59dad27-fffc-4e50-a269-262c2b77f88b-log-httpd\") pod \"ceilometer-0\" (UID: \"b59dad27-fffc-4e50-a269-262c2b77f88b\") " pod="openstack/ceilometer-0" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.335535 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b59dad27-fffc-4e50-a269-262c2b77f88b-run-httpd\") pod \"ceilometer-0\" (UID: \"b59dad27-fffc-4e50-a269-262c2b77f88b\") " pod="openstack/ceilometer-0" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.335604 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b59dad27-fffc-4e50-a269-262c2b77f88b-config-data\") pod \"ceilometer-0\" (UID: \"b59dad27-fffc-4e50-a269-262c2b77f88b\") " pod="openstack/ceilometer-0" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.336639 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b59dad27-fffc-4e50-a269-262c2b77f88b-run-httpd\") pod \"ceilometer-0\" (UID: \"b59dad27-fffc-4e50-a269-262c2b77f88b\") " pod="openstack/ceilometer-0" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.337576 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b59dad27-fffc-4e50-a269-262c2b77f88b-log-httpd\") pod \"ceilometer-0\" (UID: \"b59dad27-fffc-4e50-a269-262c2b77f88b\") " pod="openstack/ceilometer-0" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.339410 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b59dad27-fffc-4e50-a269-262c2b77f88b-scripts\") pod \"ceilometer-0\" (UID: \"b59dad27-fffc-4e50-a269-262c2b77f88b\") " pod="openstack/ceilometer-0" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.345410 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b59dad27-fffc-4e50-a269-262c2b77f88b-config-data\") pod \"ceilometer-0\" (UID: \"b59dad27-fffc-4e50-a269-262c2b77f88b\") " pod="openstack/ceilometer-0" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.345681 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b59dad27-fffc-4e50-a269-262c2b77f88b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b59dad27-fffc-4e50-a269-262c2b77f88b\") " pod="openstack/ceilometer-0" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.346298 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b59dad27-fffc-4e50-a269-262c2b77f88b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b59dad27-fffc-4e50-a269-262c2b77f88b\") " pod="openstack/ceilometer-0" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.353639 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c26vn\" (UniqueName: \"kubernetes.io/projected/b59dad27-fffc-4e50-a269-262c2b77f88b-kube-api-access-c26vn\") pod \"ceilometer-0\" (UID: \"b59dad27-fffc-4e50-a269-262c2b77f88b\") " pod="openstack/ceilometer-0" Nov 24 11:26:53 crc kubenswrapper[5072]: I1124 11:26:53.490890 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:26:54 crc kubenswrapper[5072]: I1124 11:26:54.599129 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 24 11:26:55 crc kubenswrapper[5072]: I1124 11:26:55.035960 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b596a610-936b-465e-aa9d-cb3b8f7811a4" path="/var/lib/kubelet/pods/b596a610-936b-465e-aa9d-cb3b8f7811a4/volumes" Nov 24 11:26:57 crc kubenswrapper[5072]: I1124 11:26:57.859080 5072 scope.go:117] "RemoveContainer" containerID="41e23318d797772b3402c14be3112eeff9df54547c6f7c9ab1098c4abcfe8773" Nov 24 11:26:57 crc kubenswrapper[5072]: I1124 11:26:57.906976 5072 scope.go:117] "RemoveContainer" containerID="000fde3ba0f07a2e05d9e3c475c3113c4786af8bf4e719407ca1f4881edfff42" Nov 24 11:26:58 crc kubenswrapper[5072]: I1124 11:26:58.091709 5072 scope.go:117] "RemoveContainer" containerID="0dea738dbd0d20ab607009a71fc10cafb721363f18aae1e2bccbd2b2f516fc90" Nov 24 11:26:58 crc kubenswrapper[5072]: I1124 11:26:58.110623 5072 scope.go:117] "RemoveContainer" containerID="d58ac5848c669e06620802778cb91f5a9261b93ea91426bd7da12b6e1c704a06" Nov 24 11:26:58 crc kubenswrapper[5072]: E1124 11:26:58.111006 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d58ac5848c669e06620802778cb91f5a9261b93ea91426bd7da12b6e1c704a06\": container with ID starting with d58ac5848c669e06620802778cb91f5a9261b93ea91426bd7da12b6e1c704a06 not found: ID does not exist" containerID="d58ac5848c669e06620802778cb91f5a9261b93ea91426bd7da12b6e1c704a06" Nov 24 11:26:58 crc kubenswrapper[5072]: I1124 11:26:58.111041 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d58ac5848c669e06620802778cb91f5a9261b93ea91426bd7da12b6e1c704a06"} err="failed to get container status \"d58ac5848c669e06620802778cb91f5a9261b93ea91426bd7da12b6e1c704a06\": rpc error: code = NotFound desc = could not find container \"d58ac5848c669e06620802778cb91f5a9261b93ea91426bd7da12b6e1c704a06\": container with ID starting with d58ac5848c669e06620802778cb91f5a9261b93ea91426bd7da12b6e1c704a06 not found: ID does not exist" Nov 24 11:26:58 crc kubenswrapper[5072]: I1124 11:26:58.111066 5072 scope.go:117] "RemoveContainer" containerID="41e23318d797772b3402c14be3112eeff9df54547c6f7c9ab1098c4abcfe8773" Nov 24 11:26:58 crc kubenswrapper[5072]: E1124 11:26:58.111254 5072 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41e23318d797772b3402c14be3112eeff9df54547c6f7c9ab1098c4abcfe8773\": container with ID starting with 41e23318d797772b3402c14be3112eeff9df54547c6f7c9ab1098c4abcfe8773 not found: ID does not exist" containerID="41e23318d797772b3402c14be3112eeff9df54547c6f7c9ab1098c4abcfe8773" Nov 24 11:26:58 crc kubenswrapper[5072]: I1124 11:26:58.111278 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41e23318d797772b3402c14be3112eeff9df54547c6f7c9ab1098c4abcfe8773"} err="failed to get container status \"41e23318d797772b3402c14be3112eeff9df54547c6f7c9ab1098c4abcfe8773\": rpc error: code = NotFound desc = could not find container \"41e23318d797772b3402c14be3112eeff9df54547c6f7c9ab1098c4abcfe8773\": container with ID starting with 41e23318d797772b3402c14be3112eeff9df54547c6f7c9ab1098c4abcfe8773 not found: ID does not exist" Nov 24 11:26:58 crc kubenswrapper[5072]: I1124 11:26:58.111295 5072 scope.go:117] "RemoveContainer" containerID="000fde3ba0f07a2e05d9e3c475c3113c4786af8bf4e719407ca1f4881edfff42" Nov 24 11:26:58 crc kubenswrapper[5072]: E1124 11:26:58.111487 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"000fde3ba0f07a2e05d9e3c475c3113c4786af8bf4e719407ca1f4881edfff42\": container with ID starting with 000fde3ba0f07a2e05d9e3c475c3113c4786af8bf4e719407ca1f4881edfff42 not found: ID does not exist" containerID="000fde3ba0f07a2e05d9e3c475c3113c4786af8bf4e719407ca1f4881edfff42" Nov 24 11:26:58 crc kubenswrapper[5072]: I1124 11:26:58.111511 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"000fde3ba0f07a2e05d9e3c475c3113c4786af8bf4e719407ca1f4881edfff42"} err="failed to get container status \"000fde3ba0f07a2e05d9e3c475c3113c4786af8bf4e719407ca1f4881edfff42\": rpc error: code = NotFound desc = could not find container \"000fde3ba0f07a2e05d9e3c475c3113c4786af8bf4e719407ca1f4881edfff42\": container with ID starting with 000fde3ba0f07a2e05d9e3c475c3113c4786af8bf4e719407ca1f4881edfff42 not found: ID does not exist" Nov 24 11:26:58 crc kubenswrapper[5072]: I1124 11:26:58.111526 5072 scope.go:117] "RemoveContainer" containerID="0dea738dbd0d20ab607009a71fc10cafb721363f18aae1e2bccbd2b2f516fc90" Nov 24 11:26:58 crc kubenswrapper[5072]: E1124 11:26:58.112319 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dea738dbd0d20ab607009a71fc10cafb721363f18aae1e2bccbd2b2f516fc90\": container with ID starting with 0dea738dbd0d20ab607009a71fc10cafb721363f18aae1e2bccbd2b2f516fc90 not found: ID does not exist" containerID="0dea738dbd0d20ab607009a71fc10cafb721363f18aae1e2bccbd2b2f516fc90" Nov 24 11:26:58 crc kubenswrapper[5072]: I1124 11:26:58.112391 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dea738dbd0d20ab607009a71fc10cafb721363f18aae1e2bccbd2b2f516fc90"} err="failed to get container status \"0dea738dbd0d20ab607009a71fc10cafb721363f18aae1e2bccbd2b2f516fc90\": rpc error: code = NotFound desc = could not find container \"0dea738dbd0d20ab607009a71fc10cafb721363f18aae1e2bccbd2b2f516fc90\": container with ID starting with 0dea738dbd0d20ab607009a71fc10cafb721363f18aae1e2bccbd2b2f516fc90 not found: ID does not exist" Nov 24 11:26:58 crc kubenswrapper[5072]: I1124 11:26:58.346555 5072 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:26:58 crc kubenswrapper[5072]: W1124 11:26:58.351918 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb59dad27_fffc_4e50_a269_262c2b77f88b.slice/crio-2620ff1e05f4a0bcf65743172d463f6d78b3aa0e10090ab31a9fdfc08253df3f WatchSource:0}: Error finding container 2620ff1e05f4a0bcf65743172d463f6d78b3aa0e10090ab31a9fdfc08253df3f: Status 404 returned error can't find the container with id 2620ff1e05f4a0bcf65743172d463f6d78b3aa0e10090ab31a9fdfc08253df3f Nov 24 11:26:58 crc kubenswrapper[5072]: I1124 11:26:58.688171 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:26:59 crc kubenswrapper[5072]: I1124 11:26:59.164859 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b59dad27-fffc-4e50-a269-262c2b77f88b","Type":"ContainerStarted","Data":"2e8d32fe55dd20c0d929b7cc110400e6b67ca6a7e9682dd43daf79b861e9cdf6"} Nov 24 11:26:59 crc kubenswrapper[5072]: I1124 11:26:59.165230 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b59dad27-fffc-4e50-a269-262c2b77f88b","Type":"ContainerStarted","Data":"2620ff1e05f4a0bcf65743172d463f6d78b3aa0e10090ab31a9fdfc08253df3f"} Nov 24 11:26:59 crc kubenswrapper[5072]: I1124 11:26:59.166193 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"36162589-ddbd-4386-82e5-62d4d73d41b7","Type":"ContainerStarted","Data":"b6c027a9d04a8683c8e1689246d8d30dbaf447df58b88c76a4ea9f04839311f3"} Nov 24 11:26:59 crc kubenswrapper[5072]: I1124 11:26:59.182530 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.376019795 podStartE2EDuration="11.182514024s" podCreationTimestamp="2025-11-24 11:26:48 +0000 UTC" firstStartedPulling="2025-11-24 11:26:49.125278668 +0000 UTC m=+1060.836803154" lastFinishedPulling="2025-11-24 11:26:57.931772907 +0000 UTC m=+1069.643297383" observedRunningTime="2025-11-24 11:26:59.178989085 +0000 UTC m=+1070.890513561" watchObservedRunningTime="2025-11-24 11:26:59.182514024 +0000 UTC m=+1070.894038500" Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.178846 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b59dad27-fffc-4e50-a269-262c2b77f88b","Type":"ContainerStarted","Data":"277de269c6a9e9c2d5fd0d05eb43e1e69087235b58290db1896dc560fd5ef83f"} Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.512848 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-7cpcc"] Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.514061 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-7cpcc" Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.533045 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-7cpcc"] Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.553667 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a645a183-9f0b-4761-89d5-9ed93d898c5d-operator-scripts\") pod \"nova-api-db-create-7cpcc\" (UID: \"a645a183-9f0b-4761-89d5-9ed93d898c5d\") " pod="openstack/nova-api-db-create-7cpcc" Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.553996 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb64l\" (UniqueName: \"kubernetes.io/projected/a645a183-9f0b-4761-89d5-9ed93d898c5d-kube-api-access-jb64l\") pod \"nova-api-db-create-7cpcc\" (UID: \"a645a183-9f0b-4761-89d5-9ed93d898c5d\") " pod="openstack/nova-api-db-create-7cpcc" Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.613862 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-d9mv6"] Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.615145 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-d9mv6" Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.637407 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-47a1-account-create-w245w"] Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.638791 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-47a1-account-create-w245w" Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.646834 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-d9mv6"] Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.649807 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.656903 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a645a183-9f0b-4761-89d5-9ed93d898c5d-operator-scripts\") pod \"nova-api-db-create-7cpcc\" (UID: \"a645a183-9f0b-4761-89d5-9ed93d898c5d\") " pod="openstack/nova-api-db-create-7cpcc" Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.656969 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f47541bf-a131-46fe-81d9-30eb49272885-operator-scripts\") pod \"nova-cell0-db-create-d9mv6\" (UID: \"f47541bf-a131-46fe-81d9-30eb49272885\") " pod="openstack/nova-cell0-db-create-d9mv6" Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.657023 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jvmn\" (UniqueName: \"kubernetes.io/projected/f47541bf-a131-46fe-81d9-30eb49272885-kube-api-access-9jvmn\") pod \"nova-cell0-db-create-d9mv6\" (UID: \"f47541bf-a131-46fe-81d9-30eb49272885\") " pod="openstack/nova-cell0-db-create-d9mv6" Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.657041 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx5mm\" (UniqueName: 
\"kubernetes.io/projected/ef0ae516-a614-4d41-b48e-6ec7544ecc8b-kube-api-access-lx5mm\") pod \"nova-api-47a1-account-create-w245w\" (UID: \"ef0ae516-a614-4d41-b48e-6ec7544ecc8b\") " pod="openstack/nova-api-47a1-account-create-w245w" Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.659759 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef0ae516-a614-4d41-b48e-6ec7544ecc8b-operator-scripts\") pod \"nova-api-47a1-account-create-w245w\" (UID: \"ef0ae516-a614-4d41-b48e-6ec7544ecc8b\") " pod="openstack/nova-api-47a1-account-create-w245w" Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.659790 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb64l\" (UniqueName: \"kubernetes.io/projected/a645a183-9f0b-4761-89d5-9ed93d898c5d-kube-api-access-jb64l\") pod \"nova-api-db-create-7cpcc\" (UID: \"a645a183-9f0b-4761-89d5-9ed93d898c5d\") " pod="openstack/nova-api-db-create-7cpcc" Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.656912 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-47a1-account-create-w245w"] Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.657811 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a645a183-9f0b-4761-89d5-9ed93d898c5d-operator-scripts\") pod \"nova-api-db-create-7cpcc\" (UID: \"a645a183-9f0b-4761-89d5-9ed93d898c5d\") " pod="openstack/nova-api-db-create-7cpcc" Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.696109 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb64l\" (UniqueName: \"kubernetes.io/projected/a645a183-9f0b-4761-89d5-9ed93d898c5d-kube-api-access-jb64l\") pod \"nova-api-db-create-7cpcc\" (UID: \"a645a183-9f0b-4761-89d5-9ed93d898c5d\") " pod="openstack/nova-api-db-create-7cpcc" Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.713936 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-bc2xz"] Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.715237 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-bc2xz"
Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.728584 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bc2xz"]
Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.764136 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jvmn\" (UniqueName: \"kubernetes.io/projected/f47541bf-a131-46fe-81d9-30eb49272885-kube-api-access-9jvmn\") pod \"nova-cell0-db-create-d9mv6\" (UID: \"f47541bf-a131-46fe-81d9-30eb49272885\") " pod="openstack/nova-cell0-db-create-d9mv6"
Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.764203 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx5mm\" (UniqueName: \"kubernetes.io/projected/ef0ae516-a614-4d41-b48e-6ec7544ecc8b-kube-api-access-lx5mm\") pod \"nova-api-47a1-account-create-w245w\" (UID: \"ef0ae516-a614-4d41-b48e-6ec7544ecc8b\") " pod="openstack/nova-api-47a1-account-create-w245w"
Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.764255 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef0ae516-a614-4d41-b48e-6ec7544ecc8b-operator-scripts\") pod \"nova-api-47a1-account-create-w245w\" (UID: \"ef0ae516-a614-4d41-b48e-6ec7544ecc8b\") " pod="openstack/nova-api-47a1-account-create-w245w"
Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.764716 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/784a74b5-3431-4fc5-ac75-d759b1f2a4cb-operator-scripts\") pod \"nova-cell1-db-create-bc2xz\" (UID: \"784a74b5-3431-4fc5-ac75-d759b1f2a4cb\") " pod="openstack/nova-cell1-db-create-bc2xz"
Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.764803 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jj4r\" (UniqueName: \"kubernetes.io/projected/784a74b5-3431-4fc5-ac75-d759b1f2a4cb-kube-api-access-4jj4r\") pod \"nova-cell1-db-create-bc2xz\" (UID: \"784a74b5-3431-4fc5-ac75-d759b1f2a4cb\") " pod="openstack/nova-cell1-db-create-bc2xz"
Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.764859 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f47541bf-a131-46fe-81d9-30eb49272885-operator-scripts\") pod \"nova-cell0-db-create-d9mv6\" (UID: \"f47541bf-a131-46fe-81d9-30eb49272885\") " pod="openstack/nova-cell0-db-create-d9mv6"
Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.765128 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef0ae516-a614-4d41-b48e-6ec7544ecc8b-operator-scripts\") pod \"nova-api-47a1-account-create-w245w\" (UID: \"ef0ae516-a614-4d41-b48e-6ec7544ecc8b\") " pod="openstack/nova-api-47a1-account-create-w245w"
Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.766922 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f47541bf-a131-46fe-81d9-30eb49272885-operator-scripts\") pod \"nova-cell0-db-create-d9mv6\" (UID: \"f47541bf-a131-46fe-81d9-30eb49272885\") " pod="openstack/nova-cell0-db-create-d9mv6"
Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.782764 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx5mm\" (UniqueName: \"kubernetes.io/projected/ef0ae516-a614-4d41-b48e-6ec7544ecc8b-kube-api-access-lx5mm\") pod \"nova-api-47a1-account-create-w245w\" (UID: \"ef0ae516-a614-4d41-b48e-6ec7544ecc8b\") " pod="openstack/nova-api-47a1-account-create-w245w"
Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.799833 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jvmn\" (UniqueName: \"kubernetes.io/projected/f47541bf-a131-46fe-81d9-30eb49272885-kube-api-access-9jvmn\") pod \"nova-cell0-db-create-d9mv6\" (UID: \"f47541bf-a131-46fe-81d9-30eb49272885\") " pod="openstack/nova-cell0-db-create-d9mv6"
Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.819559 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-fa17-account-create-6k8xl"]
Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.820565 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fa17-account-create-6k8xl"
Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.823824 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.827283 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7cpcc"
Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.830593 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-fa17-account-create-6k8xl"]
Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.866666 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecd15413-8bab-481f-869c-02b3fd9fadc2-operator-scripts\") pod \"nova-cell0-fa17-account-create-6k8xl\" (UID: \"ecd15413-8bab-481f-869c-02b3fd9fadc2\") " pod="openstack/nova-cell0-fa17-account-create-6k8xl"
Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.866719 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/784a74b5-3431-4fc5-ac75-d759b1f2a4cb-operator-scripts\") pod \"nova-cell1-db-create-bc2xz\" (UID: \"784a74b5-3431-4fc5-ac75-d759b1f2a4cb\") " pod="openstack/nova-cell1-db-create-bc2xz"
Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.866748 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmj2h\" (UniqueName: \"kubernetes.io/projected/ecd15413-8bab-481f-869c-02b3fd9fadc2-kube-api-access-xmj2h\") pod \"nova-cell0-fa17-account-create-6k8xl\" (UID: \"ecd15413-8bab-481f-869c-02b3fd9fadc2\") " pod="openstack/nova-cell0-fa17-account-create-6k8xl"
Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.866779 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jj4r\" (UniqueName: \"kubernetes.io/projected/784a74b5-3431-4fc5-ac75-d759b1f2a4cb-kube-api-access-4jj4r\") pod \"nova-cell1-db-create-bc2xz\" (UID: \"784a74b5-3431-4fc5-ac75-d759b1f2a4cb\") " pod="openstack/nova-cell1-db-create-bc2xz"
Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.867405 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/784a74b5-3431-4fc5-ac75-d759b1f2a4cb-operator-scripts\") pod \"nova-cell1-db-create-bc2xz\" (UID: \"784a74b5-3431-4fc5-ac75-d759b1f2a4cb\") " pod="openstack/nova-cell1-db-create-bc2xz"
Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.883181 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jj4r\" (UniqueName: \"kubernetes.io/projected/784a74b5-3431-4fc5-ac75-d759b1f2a4cb-kube-api-access-4jj4r\") pod \"nova-cell1-db-create-bc2xz\" (UID: \"784a74b5-3431-4fc5-ac75-d759b1f2a4cb\") " pod="openstack/nova-cell1-db-create-bc2xz"
Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.937228 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-d9mv6"
Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.969239 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-47a1-account-create-w245w"
Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.969741 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecd15413-8bab-481f-869c-02b3fd9fadc2-operator-scripts\") pod \"nova-cell0-fa17-account-create-6k8xl\" (UID: \"ecd15413-8bab-481f-869c-02b3fd9fadc2\") " pod="openstack/nova-cell0-fa17-account-create-6k8xl"
Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.969819 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmj2h\" (UniqueName: \"kubernetes.io/projected/ecd15413-8bab-481f-869c-02b3fd9fadc2-kube-api-access-xmj2h\") pod \"nova-cell0-fa17-account-create-6k8xl\" (UID: \"ecd15413-8bab-481f-869c-02b3fd9fadc2\") " pod="openstack/nova-cell0-fa17-account-create-6k8xl"
Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.970712 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecd15413-8bab-481f-869c-02b3fd9fadc2-operator-scripts\") pod \"nova-cell0-fa17-account-create-6k8xl\" (UID: \"ecd15413-8bab-481f-869c-02b3fd9fadc2\") " pod="openstack/nova-cell0-fa17-account-create-6k8xl"
Nov 24 11:27:00 crc kubenswrapper[5072]: I1124 11:27:00.987310 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmj2h\" (UniqueName: \"kubernetes.io/projected/ecd15413-8bab-481f-869c-02b3fd9fadc2-kube-api-access-xmj2h\") pod \"nova-cell0-fa17-account-create-6k8xl\" (UID: \"ecd15413-8bab-481f-869c-02b3fd9fadc2\") " pod="openstack/nova-cell0-fa17-account-create-6k8xl"
Nov 24 11:27:01 crc kubenswrapper[5072]: I1124 11:27:01.042475 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-bf4a-account-create-st8r6"]
Nov 24 11:27:01 crc kubenswrapper[5072]: I1124 11:27:01.043695 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-bf4a-account-create-st8r6"]
Nov 24 11:27:01 crc kubenswrapper[5072]: I1124 11:27:01.043774 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-bf4a-account-create-st8r6"
Nov 24 11:27:01 crc kubenswrapper[5072]: I1124 11:27:01.047746 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Nov 24 11:27:01 crc kubenswrapper[5072]: I1124 11:27:01.068217 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bc2xz"
Nov 24 11:27:01 crc kubenswrapper[5072]: I1124 11:27:01.071518 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6cf63fa-6157-4ba4-96fb-2b72065bbab7-operator-scripts\") pod \"nova-cell1-bf4a-account-create-st8r6\" (UID: \"f6cf63fa-6157-4ba4-96fb-2b72065bbab7\") " pod="openstack/nova-cell1-bf4a-account-create-st8r6"
Nov 24 11:27:01 crc kubenswrapper[5072]: I1124 11:27:01.071850 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vttk\" (UniqueName: \"kubernetes.io/projected/f6cf63fa-6157-4ba4-96fb-2b72065bbab7-kube-api-access-5vttk\") pod \"nova-cell1-bf4a-account-create-st8r6\" (UID: \"f6cf63fa-6157-4ba4-96fb-2b72065bbab7\") " pod="openstack/nova-cell1-bf4a-account-create-st8r6"
Nov 24 11:27:01 crc kubenswrapper[5072]: I1124 11:27:01.140692 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fa17-account-create-6k8xl"
Nov 24 11:27:01 crc kubenswrapper[5072]: I1124 11:27:01.173328 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6cf63fa-6157-4ba4-96fb-2b72065bbab7-operator-scripts\") pod \"nova-cell1-bf4a-account-create-st8r6\" (UID: \"f6cf63fa-6157-4ba4-96fb-2b72065bbab7\") " pod="openstack/nova-cell1-bf4a-account-create-st8r6"
Nov 24 11:27:01 crc kubenswrapper[5072]: I1124 11:27:01.173517 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vttk\" (UniqueName: \"kubernetes.io/projected/f6cf63fa-6157-4ba4-96fb-2b72065bbab7-kube-api-access-5vttk\") pod \"nova-cell1-bf4a-account-create-st8r6\" (UID: \"f6cf63fa-6157-4ba4-96fb-2b72065bbab7\") " pod="openstack/nova-cell1-bf4a-account-create-st8r6"
Nov 24 11:27:01 crc kubenswrapper[5072]: I1124 11:27:01.174065 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6cf63fa-6157-4ba4-96fb-2b72065bbab7-operator-scripts\") pod \"nova-cell1-bf4a-account-create-st8r6\" (UID: \"f6cf63fa-6157-4ba4-96fb-2b72065bbab7\") " pod="openstack/nova-cell1-bf4a-account-create-st8r6"
Nov 24 11:27:01 crc kubenswrapper[5072]: I1124 11:27:01.193351 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vttk\" (UniqueName: \"kubernetes.io/projected/f6cf63fa-6157-4ba4-96fb-2b72065bbab7-kube-api-access-5vttk\") pod \"nova-cell1-bf4a-account-create-st8r6\" (UID: \"f6cf63fa-6157-4ba4-96fb-2b72065bbab7\") " pod="openstack/nova-cell1-bf4a-account-create-st8r6"
Nov 24 11:27:01 crc kubenswrapper[5072]: I1124 11:27:01.202354 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b59dad27-fffc-4e50-a269-262c2b77f88b","Type":"ContainerStarted","Data":"c0fc8141787504e1987793eee0c5064b98e2369e7992e465ab9669d2260c3f98"}
Nov 24 11:27:01 crc kubenswrapper[5072]: I1124 11:27:01.273281 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-7cpcc"]
Nov 24 11:27:01 crc kubenswrapper[5072]: I1124 11:27:01.295320 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-d9mv6"]
Nov 24 11:27:01 crc kubenswrapper[5072]: I1124 11:27:01.364125 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-bf4a-account-create-st8r6"
Nov 24 11:27:01 crc kubenswrapper[5072]: I1124 11:27:01.600526 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-47a1-account-create-w245w"]
Nov 24 11:27:01 crc kubenswrapper[5072]: W1124 11:27:01.610533 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef0ae516_a614_4d41_b48e_6ec7544ecc8b.slice/crio-059bad3f58b6f4115a6c7509e6f2a743b36fb4ed2fb7eac3f8d6595671bce359 WatchSource:0}: Error finding container 059bad3f58b6f4115a6c7509e6f2a743b36fb4ed2fb7eac3f8d6595671bce359: Status 404 returned error can't find the container with id 059bad3f58b6f4115a6c7509e6f2a743b36fb4ed2fb7eac3f8d6595671bce359
Nov 24 11:27:01 crc kubenswrapper[5072]: I1124 11:27:01.631018 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bc2xz"]
Nov 24 11:27:01 crc kubenswrapper[5072]: I1124 11:27:01.778315 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-fa17-account-create-6k8xl"]
Nov 24 11:27:01 crc kubenswrapper[5072]: W1124 11:27:01.783061 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecd15413_8bab_481f_869c_02b3fd9fadc2.slice/crio-bc21da92d3b59c5806e98faf8b234b91f3ba486be2a8b2ab91363ee08ae1ec27 WatchSource:0}: Error finding container bc21da92d3b59c5806e98faf8b234b91f3ba486be2a8b2ab91363ee08ae1ec27: Status 404 returned error can't find the container with id bc21da92d3b59c5806e98faf8b234b91f3ba486be2a8b2ab91363ee08ae1ec27
Nov 24 11:27:01 crc kubenswrapper[5072]: I1124 11:27:01.931788 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-bf4a-account-create-st8r6"]
Nov 24 11:27:02 crc kubenswrapper[5072]: I1124 11:27:02.213802 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-bf4a-account-create-st8r6" event={"ID":"f6cf63fa-6157-4ba4-96fb-2b72065bbab7","Type":"ContainerStarted","Data":"d8a0b386fe35a5213f04c3b9f7d12a99fbefba563a5969afcb4fbd8475a3a5ab"}
Nov 24 11:27:02 crc kubenswrapper[5072]: I1124 11:27:02.215165 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d9mv6" event={"ID":"f47541bf-a131-46fe-81d9-30eb49272885","Type":"ContainerStarted","Data":"7b5f998e1d6d141763d629ea2f6fd478be5fc98c84edfbe115f2f0f6c5753d93"}
Nov 24 11:27:02 crc kubenswrapper[5072]: I1124 11:27:02.215189 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d9mv6" event={"ID":"f47541bf-a131-46fe-81d9-30eb49272885","Type":"ContainerStarted","Data":"373e497b9ced78a829e2c5baa906e31e2b86cf60b3edd5ad1588474735671d60"}
Nov 24 11:27:02 crc kubenswrapper[5072]: I1124 11:27:02.216391 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-47a1-account-create-w245w" event={"ID":"ef0ae516-a614-4d41-b48e-6ec7544ecc8b","Type":"ContainerStarted","Data":"059bad3f58b6f4115a6c7509e6f2a743b36fb4ed2fb7eac3f8d6595671bce359"}
Nov 24 11:27:02 crc kubenswrapper[5072]: I1124 11:27:02.217526 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fa17-account-create-6k8xl" event={"ID":"ecd15413-8bab-481f-869c-02b3fd9fadc2","Type":"ContainerStarted","Data":"bc21da92d3b59c5806e98faf8b234b91f3ba486be2a8b2ab91363ee08ae1ec27"}
Nov 24 11:27:02 crc kubenswrapper[5072]: I1124 11:27:02.218390 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bc2xz" event={"ID":"784a74b5-3431-4fc5-ac75-d759b1f2a4cb","Type":"ContainerStarted","Data":"a2a9b4e138ba2d1d4da6cf669133e1ece5ffe93624611879c54a251509c4a0b1"}
Nov 24 11:27:02 crc kubenswrapper[5072]: I1124 11:27:02.219813 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7cpcc" event={"ID":"a645a183-9f0b-4761-89d5-9ed93d898c5d","Type":"ContainerStarted","Data":"87b7bfc7260ad355aa3429eec6df1b3d0b7dc0772906030b9f5e6aa32d3ba454"}
Nov 24 11:27:02 crc kubenswrapper[5072]: I1124 11:27:02.219830 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7cpcc" event={"ID":"a645a183-9f0b-4761-89d5-9ed93d898c5d","Type":"ContainerStarted","Data":"0ae6b520551cbb666ac7f9a20a6cd38622674b1d6fdb705ff6676ff9e0c4543d"}
Nov 24 11:27:02 crc kubenswrapper[5072]: I1124 11:27:02.232617 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-d9mv6" podStartSLOduration=2.232602006 podStartE2EDuration="2.232602006s" podCreationTimestamp="2025-11-24 11:27:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:27:02.229461366 +0000 UTC m=+1073.940985842" watchObservedRunningTime="2025-11-24 11:27:02.232602006 +0000 UTC m=+1073.944126472"
Nov 24 11:27:02 crc kubenswrapper[5072]: I1124 11:27:02.255274 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-7cpcc" podStartSLOduration=2.255250729 podStartE2EDuration="2.255250729s" podCreationTimestamp="2025-11-24 11:27:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:27:02.244125678 +0000 UTC m=+1073.955650154" watchObservedRunningTime="2025-11-24 11:27:02.255250729 +0000 UTC m=+1073.966775225"
Nov 24 11:27:03 crc kubenswrapper[5072]: I1124 11:27:03.228317 5072 generic.go:334] "Generic (PLEG): container finished" podID="ecd15413-8bab-481f-869c-02b3fd9fadc2" containerID="f45b14f3baa514b53d006808a7fdbd82018d32f2ec7c97828a784ba48a03e010" exitCode=0
Nov 24 11:27:03 crc kubenswrapper[5072]: I1124 11:27:03.228414 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fa17-account-create-6k8xl" event={"ID":"ecd15413-8bab-481f-869c-02b3fd9fadc2","Type":"ContainerDied","Data":"f45b14f3baa514b53d006808a7fdbd82018d32f2ec7c97828a784ba48a03e010"}
Nov 24 11:27:03 crc kubenswrapper[5072]: I1124 11:27:03.230476 5072 generic.go:334] "Generic (PLEG): container finished" podID="784a74b5-3431-4fc5-ac75-d759b1f2a4cb" containerID="bf3f982100274b1acee0560a68188bef797f3b326e9cf87408db76488ed1a3af" exitCode=0
Nov 24 11:27:03 crc kubenswrapper[5072]: I1124 11:27:03.230563 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bc2xz" event={"ID":"784a74b5-3431-4fc5-ac75-d759b1f2a4cb","Type":"ContainerDied","Data":"bf3f982100274b1acee0560a68188bef797f3b326e9cf87408db76488ed1a3af"}
Nov 24 11:27:03 crc kubenswrapper[5072]: I1124 11:27:03.235338 5072 generic.go:334] "Generic (PLEG): container finished" podID="a645a183-9f0b-4761-89d5-9ed93d898c5d" containerID="87b7bfc7260ad355aa3429eec6df1b3d0b7dc0772906030b9f5e6aa32d3ba454" exitCode=0
Nov 24 11:27:03 crc kubenswrapper[5072]: I1124 11:27:03.235436 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7cpcc" event={"ID":"a645a183-9f0b-4761-89d5-9ed93d898c5d","Type":"ContainerDied","Data":"87b7bfc7260ad355aa3429eec6df1b3d0b7dc0772906030b9f5e6aa32d3ba454"}
Nov 24 11:27:03 crc kubenswrapper[5072]: I1124 11:27:03.240041 5072 generic.go:334] "Generic (PLEG): container finished" podID="f6cf63fa-6157-4ba4-96fb-2b72065bbab7" containerID="74b37c494113b92a10313ea1622c376c7a0a02fd275104771a80623e25cc0d31" exitCode=0
Nov 24 11:27:03 crc kubenswrapper[5072]: I1124 11:27:03.240103 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-bf4a-account-create-st8r6" event={"ID":"f6cf63fa-6157-4ba4-96fb-2b72065bbab7","Type":"ContainerDied","Data":"74b37c494113b92a10313ea1622c376c7a0a02fd275104771a80623e25cc0d31"}
Nov 24 11:27:03 crc kubenswrapper[5072]: I1124 11:27:03.241739 5072 generic.go:334] "Generic (PLEG): container finished" podID="f47541bf-a131-46fe-81d9-30eb49272885" containerID="7b5f998e1d6d141763d629ea2f6fd478be5fc98c84edfbe115f2f0f6c5753d93" exitCode=0
Nov 24 11:27:03 crc kubenswrapper[5072]: I1124 11:27:03.241837 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d9mv6" event={"ID":"f47541bf-a131-46fe-81d9-30eb49272885","Type":"ContainerDied","Data":"7b5f998e1d6d141763d629ea2f6fd478be5fc98c84edfbe115f2f0f6c5753d93"}
Nov 24 11:27:03 crc kubenswrapper[5072]: I1124 11:27:03.243239 5072 generic.go:334] "Generic (PLEG): container finished" podID="ef0ae516-a614-4d41-b48e-6ec7544ecc8b" containerID="e6128dea18b58d4ec75aa109a5be0e46d0a423c1617596295d8068649a5c1861" exitCode=0
Nov 24 11:27:03 crc kubenswrapper[5072]: I1124 11:27:03.243321 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-47a1-account-create-w245w" event={"ID":"ef0ae516-a614-4d41-b48e-6ec7544ecc8b","Type":"ContainerDied","Data":"e6128dea18b58d4ec75aa109a5be0e46d0a423c1617596295d8068649a5c1861"}
Nov 24 11:27:03 crc kubenswrapper[5072]: I1124 11:27:03.246229 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b59dad27-fffc-4e50-a269-262c2b77f88b","Type":"ContainerStarted","Data":"9570498efee1504907d2b0091f22953179d3d3ead2140ad2eec4b58c14fdbfbd"}
Nov 24 11:27:03 crc kubenswrapper[5072]: I1124 11:27:03.246519 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b59dad27-fffc-4e50-a269-262c2b77f88b" containerName="proxy-httpd" containerID="cri-o://9570498efee1504907d2b0091f22953179d3d3ead2140ad2eec4b58c14fdbfbd" gracePeriod=30
Nov 24 11:27:03 crc kubenswrapper[5072]: I1124 11:27:03.246538 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b59dad27-fffc-4e50-a269-262c2b77f88b" containerName="sg-core" containerID="cri-o://c0fc8141787504e1987793eee0c5064b98e2369e7992e465ab9669d2260c3f98" gracePeriod=30
Nov 24 11:27:03 crc kubenswrapper[5072]: I1124 11:27:03.246583 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b59dad27-fffc-4e50-a269-262c2b77f88b" containerName="ceilometer-notification-agent" containerID="cri-o://277de269c6a9e9c2d5fd0d05eb43e1e69087235b58290db1896dc560fd5ef83f" gracePeriod=30
Nov 24 11:27:03 crc kubenswrapper[5072]: I1124 11:27:03.246609 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b59dad27-fffc-4e50-a269-262c2b77f88b" containerName="ceilometer-central-agent" containerID="cri-o://2e8d32fe55dd20c0d929b7cc110400e6b67ca6a7e9682dd43daf79b861e9cdf6" gracePeriod=30
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.260746 5072 generic.go:334] "Generic (PLEG): container finished" podID="b59dad27-fffc-4e50-a269-262c2b77f88b" containerID="9570498efee1504907d2b0091f22953179d3d3ead2140ad2eec4b58c14fdbfbd" exitCode=0
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.261747 5072 generic.go:334] "Generic (PLEG): container finished" podID="b59dad27-fffc-4e50-a269-262c2b77f88b" containerID="c0fc8141787504e1987793eee0c5064b98e2369e7992e465ab9669d2260c3f98" exitCode=2
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.261847 5072 generic.go:334] "Generic (PLEG): container finished" podID="b59dad27-fffc-4e50-a269-262c2b77f88b" containerID="277de269c6a9e9c2d5fd0d05eb43e1e69087235b58290db1896dc560fd5ef83f" exitCode=0
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.260835 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b59dad27-fffc-4e50-a269-262c2b77f88b","Type":"ContainerDied","Data":"9570498efee1504907d2b0091f22953179d3d3ead2140ad2eec4b58c14fdbfbd"}
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.263065 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b59dad27-fffc-4e50-a269-262c2b77f88b","Type":"ContainerDied","Data":"c0fc8141787504e1987793eee0c5064b98e2369e7992e465ab9669d2260c3f98"}
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.263239 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b59dad27-fffc-4e50-a269-262c2b77f88b","Type":"ContainerDied","Data":"277de269c6a9e9c2d5fd0d05eb43e1e69087235b58290db1896dc560fd5ef83f"}
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.613296 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-bf4a-account-create-st8r6"
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.639647 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6cf63fa-6157-4ba4-96fb-2b72065bbab7-operator-scripts\") pod \"f6cf63fa-6157-4ba4-96fb-2b72065bbab7\" (UID: \"f6cf63fa-6157-4ba4-96fb-2b72065bbab7\") "
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.639726 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vttk\" (UniqueName: \"kubernetes.io/projected/f6cf63fa-6157-4ba4-96fb-2b72065bbab7-kube-api-access-5vttk\") pod \"f6cf63fa-6157-4ba4-96fb-2b72065bbab7\" (UID: \"f6cf63fa-6157-4ba4-96fb-2b72065bbab7\") "
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.647943 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6cf63fa-6157-4ba4-96fb-2b72065bbab7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f6cf63fa-6157-4ba4-96fb-2b72065bbab7" (UID: "f6cf63fa-6157-4ba4-96fb-2b72065bbab7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.649143 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=7.514411378 podStartE2EDuration="11.649118711s" podCreationTimestamp="2025-11-24 11:26:53 +0000 UTC" firstStartedPulling="2025-11-24 11:26:58.355919578 +0000 UTC m=+1070.067444054" lastFinishedPulling="2025-11-24 11:27:02.490626901 +0000 UTC m=+1074.202151387" observedRunningTime="2025-11-24 11:27:03.349103524 +0000 UTC m=+1075.060628000" watchObservedRunningTime="2025-11-24 11:27:04.649118711 +0000 UTC m=+1076.360643187"
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.657152 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6cf63fa-6157-4ba4-96fb-2b72065bbab7-kube-api-access-5vttk" (OuterVolumeSpecName: "kube-api-access-5vttk") pod "f6cf63fa-6157-4ba4-96fb-2b72065bbab7" (UID: "f6cf63fa-6157-4ba4-96fb-2b72065bbab7"). InnerVolumeSpecName "kube-api-access-5vttk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.727057 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-d9mv6"
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.740618 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jvmn\" (UniqueName: \"kubernetes.io/projected/f47541bf-a131-46fe-81d9-30eb49272885-kube-api-access-9jvmn\") pod \"f47541bf-a131-46fe-81d9-30eb49272885\" (UID: \"f47541bf-a131-46fe-81d9-30eb49272885\") "
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.740857 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f47541bf-a131-46fe-81d9-30eb49272885-operator-scripts\") pod \"f47541bf-a131-46fe-81d9-30eb49272885\" (UID: \"f47541bf-a131-46fe-81d9-30eb49272885\") "
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.741244 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f47541bf-a131-46fe-81d9-30eb49272885-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f47541bf-a131-46fe-81d9-30eb49272885" (UID: "f47541bf-a131-46fe-81d9-30eb49272885"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.741277 5072 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6cf63fa-6157-4ba4-96fb-2b72065bbab7-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.741298 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vttk\" (UniqueName: \"kubernetes.io/projected/f6cf63fa-6157-4ba4-96fb-2b72065bbab7-kube-api-access-5vttk\") on node \"crc\" DevicePath \"\""
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.743827 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f47541bf-a131-46fe-81d9-30eb49272885-kube-api-access-9jvmn" (OuterVolumeSpecName: "kube-api-access-9jvmn") pod "f47541bf-a131-46fe-81d9-30eb49272885" (UID: "f47541bf-a131-46fe-81d9-30eb49272885"). InnerVolumeSpecName "kube-api-access-9jvmn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.841846 5072 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f47541bf-a131-46fe-81d9-30eb49272885-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.841875 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jvmn\" (UniqueName: \"kubernetes.io/projected/f47541bf-a131-46fe-81d9-30eb49272885-kube-api-access-9jvmn\") on node \"crc\" DevicePath \"\""
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.845188 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7cpcc"
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.867112 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fa17-account-create-6k8xl"
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.894033 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bc2xz"
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.910986 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-47a1-account-create-w245w"
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.942891 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecd15413-8bab-481f-869c-02b3fd9fadc2-operator-scripts\") pod \"ecd15413-8bab-481f-869c-02b3fd9fadc2\" (UID: \"ecd15413-8bab-481f-869c-02b3fd9fadc2\") "
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.942948 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/784a74b5-3431-4fc5-ac75-d759b1f2a4cb-operator-scripts\") pod \"784a74b5-3431-4fc5-ac75-d759b1f2a4cb\" (UID: \"784a74b5-3431-4fc5-ac75-d759b1f2a4cb\") "
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.943082 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a645a183-9f0b-4761-89d5-9ed93d898c5d-operator-scripts\") pod \"a645a183-9f0b-4761-89d5-9ed93d898c5d\" (UID: \"a645a183-9f0b-4761-89d5-9ed93d898c5d\") "
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.943165 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jj4r\" (UniqueName: \"kubernetes.io/projected/784a74b5-3431-4fc5-ac75-d759b1f2a4cb-kube-api-access-4jj4r\") pod \"784a74b5-3431-4fc5-ac75-d759b1f2a4cb\" (UID: \"784a74b5-3431-4fc5-ac75-d759b1f2a4cb\") "
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.943187 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmj2h\" (UniqueName: \"kubernetes.io/projected/ecd15413-8bab-481f-869c-02b3fd9fadc2-kube-api-access-xmj2h\") pod \"ecd15413-8bab-481f-869c-02b3fd9fadc2\" (UID: \"ecd15413-8bab-481f-869c-02b3fd9fadc2\") "
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.943275 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb64l\" (UniqueName: \"kubernetes.io/projected/a645a183-9f0b-4761-89d5-9ed93d898c5d-kube-api-access-jb64l\") pod \"a645a183-9f0b-4761-89d5-9ed93d898c5d\" (UID: \"a645a183-9f0b-4761-89d5-9ed93d898c5d\") "
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.943299 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx5mm\" (UniqueName: \"kubernetes.io/projected/ef0ae516-a614-4d41-b48e-6ec7544ecc8b-kube-api-access-lx5mm\") pod \"ef0ae516-a614-4d41-b48e-6ec7544ecc8b\" (UID: \"ef0ae516-a614-4d41-b48e-6ec7544ecc8b\") "
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.943321 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef0ae516-a614-4d41-b48e-6ec7544ecc8b-operator-scripts\") pod \"ef0ae516-a614-4d41-b48e-6ec7544ecc8b\" (UID: \"ef0ae516-a614-4d41-b48e-6ec7544ecc8b\") "
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.943362 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a645a183-9f0b-4761-89d5-9ed93d898c5d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a645a183-9f0b-4761-89d5-9ed93d898c5d" (UID: "a645a183-9f0b-4761-89d5-9ed93d898c5d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.943383 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecd15413-8bab-481f-869c-02b3fd9fadc2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ecd15413-8bab-481f-869c-02b3fd9fadc2" (UID: "ecd15413-8bab-481f-869c-02b3fd9fadc2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.943679 5072 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecd15413-8bab-481f-869c-02b3fd9fadc2-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.943695 5072 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a645a183-9f0b-4761-89d5-9ed93d898c5d-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.943714 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef0ae516-a614-4d41-b48e-6ec7544ecc8b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ef0ae516-a614-4d41-b48e-6ec7544ecc8b" (UID: "ef0ae516-a614-4d41-b48e-6ec7544ecc8b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.943930 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/784a74b5-3431-4fc5-ac75-d759b1f2a4cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "784a74b5-3431-4fc5-ac75-d759b1f2a4cb" (UID: "784a74b5-3431-4fc5-ac75-d759b1f2a4cb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.946448 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a645a183-9f0b-4761-89d5-9ed93d898c5d-kube-api-access-jb64l" (OuterVolumeSpecName: "kube-api-access-jb64l") pod "a645a183-9f0b-4761-89d5-9ed93d898c5d" (UID: "a645a183-9f0b-4761-89d5-9ed93d898c5d"). InnerVolumeSpecName "kube-api-access-jb64l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.946480 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/784a74b5-3431-4fc5-ac75-d759b1f2a4cb-kube-api-access-4jj4r" (OuterVolumeSpecName: "kube-api-access-4jj4r") pod "784a74b5-3431-4fc5-ac75-d759b1f2a4cb" (UID: "784a74b5-3431-4fc5-ac75-d759b1f2a4cb"). InnerVolumeSpecName "kube-api-access-4jj4r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.947197 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef0ae516-a614-4d41-b48e-6ec7544ecc8b-kube-api-access-lx5mm" (OuterVolumeSpecName: "kube-api-access-lx5mm") pod "ef0ae516-a614-4d41-b48e-6ec7544ecc8b" (UID: "ef0ae516-a614-4d41-b48e-6ec7544ecc8b"). InnerVolumeSpecName "kube-api-access-lx5mm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:27:04 crc kubenswrapper[5072]: I1124 11:27:04.947681 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecd15413-8bab-481f-869c-02b3fd9fadc2-kube-api-access-xmj2h" (OuterVolumeSpecName: "kube-api-access-xmj2h") pod "ecd15413-8bab-481f-869c-02b3fd9fadc2" (UID: "ecd15413-8bab-481f-869c-02b3fd9fadc2"). InnerVolumeSpecName "kube-api-access-xmj2h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.044510 5072 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/784a74b5-3431-4fc5-ac75-d759b1f2a4cb-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.044535 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmj2h\" (UniqueName: \"kubernetes.io/projected/ecd15413-8bab-481f-869c-02b3fd9fadc2-kube-api-access-xmj2h\") on node \"crc\" DevicePath \"\""
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.044545 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jj4r\" (UniqueName: \"kubernetes.io/projected/784a74b5-3431-4fc5-ac75-d759b1f2a4cb-kube-api-access-4jj4r\") on node \"crc\" DevicePath \"\""
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.044555 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb64l\" (UniqueName: \"kubernetes.io/projected/a645a183-9f0b-4761-89d5-9ed93d898c5d-kube-api-access-jb64l\") on node \"crc\" DevicePath \"\""
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.044564 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx5mm\" (UniqueName: \"kubernetes.io/projected/ef0ae516-a614-4d41-b48e-6ec7544ecc8b-kube-api-access-lx5mm\") on node \"crc\" DevicePath \"\""
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.044573 5072 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef0ae516-a614-4d41-b48e-6ec7544ecc8b-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.297884 5072 generic.go:334] "Generic (PLEG): container finished" podID="b59dad27-fffc-4e50-a269-262c2b77f88b" containerID="2e8d32fe55dd20c0d929b7cc110400e6b67ca6a7e9682dd43daf79b861e9cdf6" exitCode=0
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.297997 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b59dad27-fffc-4e50-a269-262c2b77f88b","Type":"ContainerDied","Data":"2e8d32fe55dd20c0d929b7cc110400e6b67ca6a7e9682dd43daf79b861e9cdf6"}
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.300964 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fa17-account-create-6k8xl" event={"ID":"ecd15413-8bab-481f-869c-02b3fd9fadc2","Type":"ContainerDied","Data":"bc21da92d3b59c5806e98faf8b234b91f3ba486be2a8b2ab91363ee08ae1ec27"}
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.301007 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc21da92d3b59c5806e98faf8b234b91f3ba486be2a8b2ab91363ee08ae1ec27"
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.301072 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fa17-account-create-6k8xl"
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.303712 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7cpcc" event={"ID":"a645a183-9f0b-4761-89d5-9ed93d898c5d","Type":"ContainerDied","Data":"0ae6b520551cbb666ac7f9a20a6cd38622674b1d6fdb705ff6676ff9e0c4543d"}
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.303734 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7cpcc"
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.303746 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ae6b520551cbb666ac7f9a20a6cd38622674b1d6fdb705ff6676ff9e0c4543d"
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.306044 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bc2xz" event={"ID":"784a74b5-3431-4fc5-ac75-d759b1f2a4cb","Type":"ContainerDied","Data":"a2a9b4e138ba2d1d4da6cf669133e1ece5ffe93624611879c54a251509c4a0b1"}
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.306082 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2a9b4e138ba2d1d4da6cf669133e1ece5ffe93624611879c54a251509c4a0b1"
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.306142 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bc2xz"
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.309666 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-bf4a-account-create-st8r6" event={"ID":"f6cf63fa-6157-4ba4-96fb-2b72065bbab7","Type":"ContainerDied","Data":"d8a0b386fe35a5213f04c3b9f7d12a99fbefba563a5969afcb4fbd8475a3a5ab"}
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.309742 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8a0b386fe35a5213f04c3b9f7d12a99fbefba563a5969afcb4fbd8475a3a5ab"
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.309967 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-bf4a-account-create-st8r6"
Nov 24 11:27:05 crc kubenswrapper[5072]: E1124 11:27:05.315665 5072 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb59dad27_fffc_4e50_a269_262c2b77f88b.slice/crio-conmon-2e8d32fe55dd20c0d929b7cc110400e6b67ca6a7e9682dd43daf79b861e9cdf6.scope\": RecentStats: unable to find data in memory cache]"
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.316349 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d9mv6" event={"ID":"f47541bf-a131-46fe-81d9-30eb49272885","Type":"ContainerDied","Data":"373e497b9ced78a829e2c5baa906e31e2b86cf60b3edd5ad1588474735671d60"}
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.316395 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="373e497b9ced78a829e2c5baa906e31e2b86cf60b3edd5ad1588474735671d60"
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.316422 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-d9mv6"
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.321950 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-47a1-account-create-w245w" event={"ID":"ef0ae516-a614-4d41-b48e-6ec7544ecc8b","Type":"ContainerDied","Data":"059bad3f58b6f4115a6c7509e6f2a743b36fb4ed2fb7eac3f8d6595671bce359"}
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.321979 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-47a1-account-create-w245w"
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.321989 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="059bad3f58b6f4115a6c7509e6f2a743b36fb4ed2fb7eac3f8d6595671bce359"
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.457325 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.551710 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b59dad27-fffc-4e50-a269-262c2b77f88b-run-httpd\") pod \"b59dad27-fffc-4e50-a269-262c2b77f88b\" (UID: \"b59dad27-fffc-4e50-a269-262c2b77f88b\") "
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.551778 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b59dad27-fffc-4e50-a269-262c2b77f88b-config-data\") pod \"b59dad27-fffc-4e50-a269-262c2b77f88b\" (UID: \"b59dad27-fffc-4e50-a269-262c2b77f88b\") "
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.551879 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b59dad27-fffc-4e50-a269-262c2b77f88b-sg-core-conf-yaml\") pod \"b59dad27-fffc-4e50-a269-262c2b77f88b\" (UID: \"b59dad27-fffc-4e50-a269-262c2b77f88b\") "
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.552006 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b59dad27-fffc-4e50-a269-262c2b77f88b-scripts\") pod \"b59dad27-fffc-4e50-a269-262c2b77f88b\" (UID: \"b59dad27-fffc-4e50-a269-262c2b77f88b\") "
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.552041 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b59dad27-fffc-4e50-a269-262c2b77f88b-log-httpd\") pod \"b59dad27-fffc-4e50-a269-262c2b77f88b\" (UID: \"b59dad27-fffc-4e50-a269-262c2b77f88b\") "
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.552099 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c26vn\" (UniqueName: \"kubernetes.io/projected/b59dad27-fffc-4e50-a269-262c2b77f88b-kube-api-access-c26vn\") pod \"b59dad27-fffc-4e50-a269-262c2b77f88b\" (UID: \"b59dad27-fffc-4e50-a269-262c2b77f88b\") "
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.552171 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b59dad27-fffc-4e50-a269-262c2b77f88b-combined-ca-bundle\") pod \"b59dad27-fffc-4e50-a269-262c2b77f88b\" (UID: \"b59dad27-fffc-4e50-a269-262c2b77f88b\") "
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.552089 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b59dad27-fffc-4e50-a269-262c2b77f88b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b59dad27-fffc-4e50-a269-262c2b77f88b" (UID: "b59dad27-fffc-4e50-a269-262c2b77f88b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.557296 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b59dad27-fffc-4e50-a269-262c2b77f88b-kube-api-access-c26vn" (OuterVolumeSpecName: "kube-api-access-c26vn") pod "b59dad27-fffc-4e50-a269-262c2b77f88b" (UID: "b59dad27-fffc-4e50-a269-262c2b77f88b"). InnerVolumeSpecName "kube-api-access-c26vn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.557691 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b59dad27-fffc-4e50-a269-262c2b77f88b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b59dad27-fffc-4e50-a269-262c2b77f88b" (UID: "b59dad27-fffc-4e50-a269-262c2b77f88b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.557761 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b59dad27-fffc-4e50-a269-262c2b77f88b-scripts" (OuterVolumeSpecName: "scripts") pod "b59dad27-fffc-4e50-a269-262c2b77f88b" (UID: "b59dad27-fffc-4e50-a269-262c2b77f88b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.579750 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b59dad27-fffc-4e50-a269-262c2b77f88b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b59dad27-fffc-4e50-a269-262c2b77f88b" (UID: "b59dad27-fffc-4e50-a269-262c2b77f88b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.630429 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b59dad27-fffc-4e50-a269-262c2b77f88b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b59dad27-fffc-4e50-a269-262c2b77f88b" (UID: "b59dad27-fffc-4e50-a269-262c2b77f88b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.654485 5072 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b59dad27-fffc-4e50-a269-262c2b77f88b-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.654748 5072 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b59dad27-fffc-4e50-a269-262c2b77f88b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.654816 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b59dad27-fffc-4e50-a269-262c2b77f88b-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.654897 5072 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b59dad27-fffc-4e50-a269-262c2b77f88b-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.654976 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c26vn\" (UniqueName: \"kubernetes.io/projected/b59dad27-fffc-4e50-a269-262c2b77f88b-kube-api-access-c26vn\") on node \"crc\" DevicePath \"\""
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.655105 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b59dad27-fffc-4e50-a269-262c2b77f88b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.670178 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b59dad27-fffc-4e50-a269-262c2b77f88b-config-data" (OuterVolumeSpecName: "config-data") pod "b59dad27-fffc-4e50-a269-262c2b77f88b" (UID: "b59dad27-fffc-4e50-a269-262c2b77f88b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:27:05 crc kubenswrapper[5072]: I1124 11:27:05.756137 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b59dad27-fffc-4e50-a269-262c2b77f88b-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.125811 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vp5q8"]
Nov 24 11:27:06 crc kubenswrapper[5072]: E1124 11:27:06.126136 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6cf63fa-6157-4ba4-96fb-2b72065bbab7" containerName="mariadb-account-create"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.126152 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6cf63fa-6157-4ba4-96fb-2b72065bbab7" containerName="mariadb-account-create"
Nov 24 11:27:06 crc kubenswrapper[5072]: E1124 11:27:06.126202 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecd15413-8bab-481f-869c-02b3fd9fadc2" containerName="mariadb-account-create"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.126209 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecd15413-8bab-481f-869c-02b3fd9fadc2" containerName="mariadb-account-create"
Nov 24 11:27:06 crc kubenswrapper[5072]: E1124 11:27:06.126219 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f47541bf-a131-46fe-81d9-30eb49272885" containerName="mariadb-database-create"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.126226 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="f47541bf-a131-46fe-81d9-30eb49272885" containerName="mariadb-database-create"
Nov 24 11:27:06 crc kubenswrapper[5072]: E1124 11:27:06.126233 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b59dad27-fffc-4e50-a269-262c2b77f88b" containerName="ceilometer-central-agent"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.126239 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="b59dad27-fffc-4e50-a269-262c2b77f88b" containerName="ceilometer-central-agent"
Nov 24 11:27:06 crc kubenswrapper[5072]: E1124 11:27:06.126248 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b59dad27-fffc-4e50-a269-262c2b77f88b" containerName="sg-core"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.126253 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="b59dad27-fffc-4e50-a269-262c2b77f88b" containerName="sg-core"
Nov 24 11:27:06 crc kubenswrapper[5072]: E1124 11:27:06.126263 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="784a74b5-3431-4fc5-ac75-d759b1f2a4cb" containerName="mariadb-database-create"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.126270 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="784a74b5-3431-4fc5-ac75-d759b1f2a4cb" containerName="mariadb-database-create"
Nov 24 11:27:06 crc kubenswrapper[5072]: E1124 11:27:06.126278 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b59dad27-fffc-4e50-a269-262c2b77f88b" containerName="proxy-httpd"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.126284 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="b59dad27-fffc-4e50-a269-262c2b77f88b" containerName="proxy-httpd"
Nov 24 11:27:06 crc kubenswrapper[5072]: E1124 11:27:06.126298 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a645a183-9f0b-4761-89d5-9ed93d898c5d" containerName="mariadb-database-create"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.126304 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="a645a183-9f0b-4761-89d5-9ed93d898c5d" containerName="mariadb-database-create"
Nov 24 11:27:06 crc kubenswrapper[5072]: E1124 11:27:06.126316 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef0ae516-a614-4d41-b48e-6ec7544ecc8b" containerName="mariadb-account-create"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.126321 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef0ae516-a614-4d41-b48e-6ec7544ecc8b" containerName="mariadb-account-create"
Nov 24 11:27:06 crc kubenswrapper[5072]: E1124 11:27:06.126330 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b59dad27-fffc-4e50-a269-262c2b77f88b" containerName="ceilometer-notification-agent"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.126335 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="b59dad27-fffc-4e50-a269-262c2b77f88b" containerName="ceilometer-notification-agent"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.126490 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6cf63fa-6157-4ba4-96fb-2b72065bbab7" containerName="mariadb-account-create"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.126507 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="a645a183-9f0b-4761-89d5-9ed93d898c5d" containerName="mariadb-database-create"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.126515 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecd15413-8bab-481f-869c-02b3fd9fadc2" containerName="mariadb-account-create"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.126526 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="784a74b5-3431-4fc5-ac75-d759b1f2a4cb" containerName="mariadb-database-create"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.126535 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="b59dad27-fffc-4e50-a269-262c2b77f88b" containerName="proxy-httpd"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.126544 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef0ae516-a614-4d41-b48e-6ec7544ecc8b" containerName="mariadb-account-create"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.126551 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="b59dad27-fffc-4e50-a269-262c2b77f88b" containerName="sg-core"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.126561 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="b59dad27-fffc-4e50-a269-262c2b77f88b" containerName="ceilometer-central-agent"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.126569 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="b59dad27-fffc-4e50-a269-262c2b77f88b" containerName="ceilometer-notification-agent"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.126578 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="f47541bf-a131-46fe-81d9-30eb49272885" containerName="mariadb-database-create"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.127099 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vp5q8"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.131489 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.140072 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.140092 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-ltsnd"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.147039 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vp5q8"]
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.162099 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da16f5d0-f121-4388-983a-caca760fa5c6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vp5q8\" (UID: \"da16f5d0-f121-4388-983a-caca760fa5c6\") " pod="openstack/nova-cell0-conductor-db-sync-vp5q8"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.162169 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da16f5d0-f121-4388-983a-caca760fa5c6-config-data\") pod \"nova-cell0-conductor-db-sync-vp5q8\" (UID: \"da16f5d0-f121-4388-983a-caca760fa5c6\") " pod="openstack/nova-cell0-conductor-db-sync-vp5q8"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.162217 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt25z\" (UniqueName: \"kubernetes.io/projected/da16f5d0-f121-4388-983a-caca760fa5c6-kube-api-access-vt25z\") pod \"nova-cell0-conductor-db-sync-vp5q8\" (UID: \"da16f5d0-f121-4388-983a-caca760fa5c6\") " pod="openstack/nova-cell0-conductor-db-sync-vp5q8"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.162258 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da16f5d0-f121-4388-983a-caca760fa5c6-scripts\") pod \"nova-cell0-conductor-db-sync-vp5q8\" (UID: \"da16f5d0-f121-4388-983a-caca760fa5c6\") " pod="openstack/nova-cell0-conductor-db-sync-vp5q8"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.263444 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt25z\" (UniqueName: \"kubernetes.io/projected/da16f5d0-f121-4388-983a-caca760fa5c6-kube-api-access-vt25z\") pod \"nova-cell0-conductor-db-sync-vp5q8\" (UID: \"da16f5d0-f121-4388-983a-caca760fa5c6\") " pod="openstack/nova-cell0-conductor-db-sync-vp5q8"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.263844 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da16f5d0-f121-4388-983a-caca760fa5c6-scripts\") pod \"nova-cell0-conductor-db-sync-vp5q8\" (UID: \"da16f5d0-f121-4388-983a-caca760fa5c6\") " pod="openstack/nova-cell0-conductor-db-sync-vp5q8"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.263940 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da16f5d0-f121-4388-983a-caca760fa5c6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vp5q8\" (UID: \"da16f5d0-f121-4388-983a-caca760fa5c6\") " pod="openstack/nova-cell0-conductor-db-sync-vp5q8"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.264647 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da16f5d0-f121-4388-983a-caca760fa5c6-config-data\") pod \"nova-cell0-conductor-db-sync-vp5q8\" (UID: \"da16f5d0-f121-4388-983a-caca760fa5c6\") " pod="openstack/nova-cell0-conductor-db-sync-vp5q8"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.268009 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da16f5d0-f121-4388-983a-caca760fa5c6-config-data\") pod \"nova-cell0-conductor-db-sync-vp5q8\" (UID: \"da16f5d0-f121-4388-983a-caca760fa5c6\") " pod="openstack/nova-cell0-conductor-db-sync-vp5q8"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.268994 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da16f5d0-f121-4388-983a-caca760fa5c6-scripts\") pod \"nova-cell0-conductor-db-sync-vp5q8\" (UID: \"da16f5d0-f121-4388-983a-caca760fa5c6\") " pod="openstack/nova-cell0-conductor-db-sync-vp5q8"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.272904 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da16f5d0-f121-4388-983a-caca760fa5c6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vp5q8\" (UID: \"da16f5d0-f121-4388-983a-caca760fa5c6\") " pod="openstack/nova-cell0-conductor-db-sync-vp5q8"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.285056 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt25z\" (UniqueName: \"kubernetes.io/projected/da16f5d0-f121-4388-983a-caca760fa5c6-kube-api-access-vt25z\") pod \"nova-cell0-conductor-db-sync-vp5q8\" (UID: \"da16f5d0-f121-4388-983a-caca760fa5c6\") " pod="openstack/nova-cell0-conductor-db-sync-vp5q8"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.332749 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b59dad27-fffc-4e50-a269-262c2b77f88b","Type":"ContainerDied","Data":"2620ff1e05f4a0bcf65743172d463f6d78b3aa0e10090ab31a9fdfc08253df3f"}
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.332817 5072 scope.go:117] "RemoveContainer" containerID="9570498efee1504907d2b0091f22953179d3d3ead2140ad2eec4b58c14fdbfbd"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.332979 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.357225 5072 scope.go:117] "RemoveContainer" containerID="c0fc8141787504e1987793eee0c5064b98e2369e7992e465ab9669d2260c3f98"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.383698 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.388679 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.388689 5072 scope.go:117] "RemoveContainer" containerID="277de269c6a9e9c2d5fd0d05eb43e1e69087235b58290db1896dc560fd5ef83f"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.395325 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.397982 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.401752 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.401837 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.410212 5072 scope.go:117] "RemoveContainer" containerID="2e8d32fe55dd20c0d929b7cc110400e6b67ca6a7e9682dd43daf79b861e9cdf6"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.412430 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.441925 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vp5q8"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.466540 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\") " pod="openstack/ceilometer-0"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.466580 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-config-data\") pod \"ceilometer-0\" (UID: \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\") " pod="openstack/ceilometer-0"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.466677 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\") " pod="openstack/ceilometer-0"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.466695 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-scripts\") pod \"ceilometer-0\" (UID: \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\") " pod="openstack/ceilometer-0"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.466752 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-log-httpd\") pod \"ceilometer-0\" (UID: \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\") " pod="openstack/ceilometer-0"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.466793 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-run-httpd\") pod \"ceilometer-0\" (UID: \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\") " pod="openstack/ceilometer-0"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.466850 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8lhp\" (UniqueName: \"kubernetes.io/projected/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-kube-api-access-g8lhp\") pod \"ceilometer-0\" (UID: \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\") " pod="openstack/ceilometer-0"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.568393 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\") " pod="openstack/ceilometer-0"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.568448 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-scripts\") pod \"ceilometer-0\" (UID: \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\") " pod="openstack/ceilometer-0"
Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.568507 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\"
(UniqueName: \"kubernetes.io/empty-dir/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-log-httpd\") pod \"ceilometer-0\" (UID: \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\") " pod="openstack/ceilometer-0" Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.568648 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-run-httpd\") pod \"ceilometer-0\" (UID: \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\") " pod="openstack/ceilometer-0" Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.568775 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8lhp\" (UniqueName: \"kubernetes.io/projected/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-kube-api-access-g8lhp\") pod \"ceilometer-0\" (UID: \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\") " pod="openstack/ceilometer-0" Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.568841 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\") " pod="openstack/ceilometer-0" Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.568885 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-config-data\") pod \"ceilometer-0\" (UID: \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\") " pod="openstack/ceilometer-0" Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.569506 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-run-httpd\") pod \"ceilometer-0\" (UID: \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\") " pod="openstack/ceilometer-0" Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.569725 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-log-httpd\") pod \"ceilometer-0\" (UID: \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\") " pod="openstack/ceilometer-0" Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.575761 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-config-data\") pod \"ceilometer-0\" (UID: \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\") " pod="openstack/ceilometer-0" Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.575842 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\") " pod="openstack/ceilometer-0" Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.581553 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\") " pod="openstack/ceilometer-0" Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.582293 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-scripts\") pod \"ceilometer-0\" (UID: \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\") " pod="openstack/ceilometer-0" Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.589135 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8lhp\" (UniqueName: \"kubernetes.io/projected/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-kube-api-access-g8lhp\") pod \"ceilometer-0\" (UID: \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\") " pod="openstack/ceilometer-0" Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.715629 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:27:06 crc kubenswrapper[5072]: I1124 11:27:06.906420 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vp5q8"] Nov 24 11:27:07 crc kubenswrapper[5072]: I1124 11:27:07.028245 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b59dad27-fffc-4e50-a269-262c2b77f88b" path="/var/lib/kubelet/pods/b59dad27-fffc-4e50-a269-262c2b77f88b/volumes" Nov 24 11:27:07 crc kubenswrapper[5072]: I1124 11:27:07.188096 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:27:07 crc kubenswrapper[5072]: W1124 11:27:07.194014 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25e2c3b5_6179_4d4f_94ef_a8645a35a2ea.slice/crio-9a07aafa17f4e1618c4f12050225b6b13e54de4aac51464ae939a3928d646c03 WatchSource:0}: Error finding container 9a07aafa17f4e1618c4f12050225b6b13e54de4aac51464ae939a3928d646c03: Status 404 returned error can't find the container with id 9a07aafa17f4e1618c4f12050225b6b13e54de4aac51464ae939a3928d646c03 Nov 24 11:27:07 crc kubenswrapper[5072]: I1124 11:27:07.350099 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vp5q8" event={"ID":"da16f5d0-f121-4388-983a-caca760fa5c6","Type":"ContainerStarted","Data":"3c65e7a4ff4e2cdb58e30b32eea8a2276bfe9a42e68eafc322e8b2cbb568de5f"} Nov 24 11:27:07 crc kubenswrapper[5072]: I1124 11:27:07.354359 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea","Type":"ContainerStarted","Data":"9a07aafa17f4e1618c4f12050225b6b13e54de4aac51464ae939a3928d646c03"} Nov 24 11:27:08 crc kubenswrapper[5072]: I1124 11:27:08.364861 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea","Type":"ContainerStarted","Data":"3cc0994b54e5ba02fd84cb19da669f20a9e93e6c7d89899e145a9884b5a4b17c"} Nov 24 11:27:09 crc kubenswrapper[5072]: I1124 11:27:09.374655 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea","Type":"ContainerStarted","Data":"a1bb8489e737dbd408bda813af16906c3d6f142f5853140b9e1ed5895142eb7c"} Nov 24 11:27:09 crc kubenswrapper[5072]: I1124 11:27:09.374985 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea","Type":"ContainerStarted","Data":"f8a1b4c5e46bbb76c7a44184bc1904a0a2606302b72617deae3e4160c474ddcc"} Nov 24 11:27:13 crc kubenswrapper[5072]: I1124 11:27:13.645404 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:27:13 crc kubenswrapper[5072]: I1124 11:27:13.645962 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:27:14 crc kubenswrapper[5072]: I1124 11:27:14.423284 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea","Type":"ContainerStarted","Data":"0a047b9e318fc25aba67c8da607192621e39331fd3d67280e241bfd35a1552aa"} Nov 24 11:27:14 crc kubenswrapper[5072]: I1124 11:27:14.423626 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 11:27:14 crc kubenswrapper[5072]: I1124 11:27:14.424795 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vp5q8" event={"ID":"da16f5d0-f121-4388-983a-caca760fa5c6","Type":"ContainerStarted","Data":"2a0b31b06b87bbc624e6f5a2b7b21d3dcc46b487c372cb54d650bc6017fdd911"} Nov 24 11:27:14 crc kubenswrapper[5072]: I1124 11:27:14.455001 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.107044623 podStartE2EDuration="8.454978421s" podCreationTimestamp="2025-11-24 11:27:06 +0000 UTC" firstStartedPulling="2025-11-24 11:27:07.19728629 +0000 UTC m=+1078.908810766" lastFinishedPulling="2025-11-24 11:27:13.545220068 +0000 UTC m=+1085.256744564" observedRunningTime="2025-11-24 11:27:14.450451846 +0000 UTC m=+1086.161976352" watchObservedRunningTime="2025-11-24 11:27:14.454978421 +0000 UTC m=+1086.166502917" Nov 24 11:27:14 crc kubenswrapper[5072]: I1124 11:27:14.485441 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-vp5q8" podStartSLOduration=1.8529713079999999 podStartE2EDuration="8.485414472s" podCreationTimestamp="2025-11-24 11:27:06 +0000 UTC" firstStartedPulling="2025-11-24 11:27:06.914003005 +0000 UTC m=+1078.625527481" lastFinishedPulling="2025-11-24 11:27:13.546446139 +0000 UTC m=+1085.257970645" observedRunningTime="2025-11-24 11:27:14.479737918 +0000 UTC m=+1086.191262414" watchObservedRunningTime="2025-11-24 11:27:14.485414472 +0000 UTC m=+1086.196938988" Nov 24 11:27:16 crc kubenswrapper[5072]: I1124 11:27:16.108070 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:27:16 crc kubenswrapper[5072]: I1124 11:27:16.442640 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="25e2c3b5-6179-4d4f-94ef-a8645a35a2ea" containerName="ceilometer-central-agent" containerID="cri-o://3cc0994b54e5ba02fd84cb19da669f20a9e93e6c7d89899e145a9884b5a4b17c" gracePeriod=30 Nov 24 11:27:16 crc kubenswrapper[5072]: I1124 11:27:16.442724 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="25e2c3b5-6179-4d4f-94ef-a8645a35a2ea" containerName="sg-core" containerID="cri-o://a1bb8489e737dbd408bda813af16906c3d6f142f5853140b9e1ed5895142eb7c" gracePeriod=30 Nov 24 11:27:16 crc kubenswrapper[5072]: I1124 11:27:16.442843 5072 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="25e2c3b5-6179-4d4f-94ef-a8645a35a2ea" containerName="ceilometer-notification-agent" containerID="cri-o://f8a1b4c5e46bbb76c7a44184bc1904a0a2606302b72617deae3e4160c474ddcc" gracePeriod=30 Nov 24 11:27:16 crc kubenswrapper[5072]: I1124 11:27:16.442931 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="25e2c3b5-6179-4d4f-94ef-a8645a35a2ea" containerName="proxy-httpd" containerID="cri-o://0a047b9e318fc25aba67c8da607192621e39331fd3d67280e241bfd35a1552aa" gracePeriod=30 Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.185727 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.255820 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-combined-ca-bundle\") pod \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\" (UID: \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\") " Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.255921 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-log-httpd\") pod \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\" (UID: \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\") " Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.255977 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-run-httpd\") pod \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\" (UID: \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\") " Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.256004 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-scripts\") pod \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\" (UID: \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\") " Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.256064 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8lhp\" (UniqueName: \"kubernetes.io/projected/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-kube-api-access-g8lhp\") pod \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\" (UID: \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\") " Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.256084 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-sg-core-conf-yaml\") pod \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\" (UID: \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\") " Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.256507 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "25e2c3b5-6179-4d4f-94ef-a8645a35a2ea" (UID: "25e2c3b5-6179-4d4f-94ef-a8645a35a2ea"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.256634 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "25e2c3b5-6179-4d4f-94ef-a8645a35a2ea" (UID: "25e2c3b5-6179-4d4f-94ef-a8645a35a2ea"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.262319 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-scripts" (OuterVolumeSpecName: "scripts") pod "25e2c3b5-6179-4d4f-94ef-a8645a35a2ea" (UID: "25e2c3b5-6179-4d4f-94ef-a8645a35a2ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.263633 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-kube-api-access-g8lhp" (OuterVolumeSpecName: "kube-api-access-g8lhp") pod "25e2c3b5-6179-4d4f-94ef-a8645a35a2ea" (UID: "25e2c3b5-6179-4d4f-94ef-a8645a35a2ea"). InnerVolumeSpecName "kube-api-access-g8lhp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.279318 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "25e2c3b5-6179-4d4f-94ef-a8645a35a2ea" (UID: "25e2c3b5-6179-4d4f-94ef-a8645a35a2ea"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.338820 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25e2c3b5-6179-4d4f-94ef-a8645a35a2ea" (UID: "25e2c3b5-6179-4d4f-94ef-a8645a35a2ea"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.357412 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-config-data\") pod \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\" (UID: \"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea\") " Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.358119 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8lhp\" (UniqueName: \"kubernetes.io/projected/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-kube-api-access-g8lhp\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.358148 5072 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.358160 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.358172 5072 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.358183 5072 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.358194 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.452848 5072 generic.go:334] "Generic (PLEG): container finished" podID="25e2c3b5-6179-4d4f-94ef-a8645a35a2ea" containerID="0a047b9e318fc25aba67c8da607192621e39331fd3d67280e241bfd35a1552aa" exitCode=0 Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.452886 5072 generic.go:334] "Generic (PLEG): container finished" podID="25e2c3b5-6179-4d4f-94ef-a8645a35a2ea" containerID="a1bb8489e737dbd408bda813af16906c3d6f142f5853140b9e1ed5895142eb7c" exitCode=2 Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.452895 5072 generic.go:334] "Generic (PLEG): container finished" podID="25e2c3b5-6179-4d4f-94ef-a8645a35a2ea" containerID="f8a1b4c5e46bbb76c7a44184bc1904a0a2606302b72617deae3e4160c474ddcc" exitCode=0 Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.452905 5072 generic.go:334] "Generic (PLEG): container finished" podID="25e2c3b5-6179-4d4f-94ef-a8645a35a2ea" containerID="3cc0994b54e5ba02fd84cb19da669f20a9e93e6c7d89899e145a9884b5a4b17c" exitCode=0 Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.452926 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea","Type":"ContainerDied","Data":"0a047b9e318fc25aba67c8da607192621e39331fd3d67280e241bfd35a1552aa"} Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.452954 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea","Type":"ContainerDied","Data":"a1bb8489e737dbd408bda813af16906c3d6f142f5853140b9e1ed5895142eb7c"} Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.452968 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea","Type":"ContainerDied","Data":"f8a1b4c5e46bbb76c7a44184bc1904a0a2606302b72617deae3e4160c474ddcc"} Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.452979 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea","Type":"ContainerDied","Data":"3cc0994b54e5ba02fd84cb19da669f20a9e93e6c7d89899e145a9884b5a4b17c"} Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.452990 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25e2c3b5-6179-4d4f-94ef-a8645a35a2ea","Type":"ContainerDied","Data":"9a07aafa17f4e1618c4f12050225b6b13e54de4aac51464ae939a3928d646c03"} Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.453009 5072 scope.go:117] "RemoveContainer" containerID="0a047b9e318fc25aba67c8da607192621e39331fd3d67280e241bfd35a1552aa" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.453149 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.467030 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-config-data" (OuterVolumeSpecName: "config-data") pod "25e2c3b5-6179-4d4f-94ef-a8645a35a2ea" (UID: "25e2c3b5-6179-4d4f-94ef-a8645a35a2ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.484317 5072 scope.go:117] "RemoveContainer" containerID="a1bb8489e737dbd408bda813af16906c3d6f142f5853140b9e1ed5895142eb7c" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.507126 5072 scope.go:117] "RemoveContainer" containerID="f8a1b4c5e46bbb76c7a44184bc1904a0a2606302b72617deae3e4160c474ddcc" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.539707 5072 scope.go:117] "RemoveContainer" containerID="3cc0994b54e5ba02fd84cb19da669f20a9e93e6c7d89899e145a9884b5a4b17c" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.562306 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.588823 5072 scope.go:117] "RemoveContainer" containerID="0a047b9e318fc25aba67c8da607192621e39331fd3d67280e241bfd35a1552aa" Nov 24 11:27:17 crc kubenswrapper[5072]: E1124 11:27:17.589328 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a047b9e318fc25aba67c8da607192621e39331fd3d67280e241bfd35a1552aa\": container with ID starting with 0a047b9e318fc25aba67c8da607192621e39331fd3d67280e241bfd35a1552aa not found: ID does not exist" containerID="0a047b9e318fc25aba67c8da607192621e39331fd3d67280e241bfd35a1552aa" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.589425 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a047b9e318fc25aba67c8da607192621e39331fd3d67280e241bfd35a1552aa"} err="failed to get container status 
\"0a047b9e318fc25aba67c8da607192621e39331fd3d67280e241bfd35a1552aa\": rpc error: code = NotFound desc = could not find container \"0a047b9e318fc25aba67c8da607192621e39331fd3d67280e241bfd35a1552aa\": container with ID starting with 0a047b9e318fc25aba67c8da607192621e39331fd3d67280e241bfd35a1552aa not found: ID does not exist" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.589494 5072 scope.go:117] "RemoveContainer" containerID="a1bb8489e737dbd408bda813af16906c3d6f142f5853140b9e1ed5895142eb7c" Nov 24 11:27:17 crc kubenswrapper[5072]: E1124 11:27:17.590116 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1bb8489e737dbd408bda813af16906c3d6f142f5853140b9e1ed5895142eb7c\": container with ID starting with a1bb8489e737dbd408bda813af16906c3d6f142f5853140b9e1ed5895142eb7c not found: ID does not exist" containerID="a1bb8489e737dbd408bda813af16906c3d6f142f5853140b9e1ed5895142eb7c" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.590194 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1bb8489e737dbd408bda813af16906c3d6f142f5853140b9e1ed5895142eb7c"} err="failed to get container status \"a1bb8489e737dbd408bda813af16906c3d6f142f5853140b9e1ed5895142eb7c\": rpc error: code = NotFound desc = could not find container \"a1bb8489e737dbd408bda813af16906c3d6f142f5853140b9e1ed5895142eb7c\": container with ID starting with a1bb8489e737dbd408bda813af16906c3d6f142f5853140b9e1ed5895142eb7c not found: ID does not exist" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.590228 5072 scope.go:117] "RemoveContainer" containerID="f8a1b4c5e46bbb76c7a44184bc1904a0a2606302b72617deae3e4160c474ddcc" Nov 24 11:27:17 crc kubenswrapper[5072]: E1124 11:27:17.590636 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8a1b4c5e46bbb76c7a44184bc1904a0a2606302b72617deae3e4160c474ddcc\": container with ID starting with f8a1b4c5e46bbb76c7a44184bc1904a0a2606302b72617deae3e4160c474ddcc not found: ID does not exist" containerID="f8a1b4c5e46bbb76c7a44184bc1904a0a2606302b72617deae3e4160c474ddcc" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.590661 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8a1b4c5e46bbb76c7a44184bc1904a0a2606302b72617deae3e4160c474ddcc"} err="failed to get container status \"f8a1b4c5e46bbb76c7a44184bc1904a0a2606302b72617deae3e4160c474ddcc\": rpc error: code = NotFound desc = could not find container \"f8a1b4c5e46bbb76c7a44184bc1904a0a2606302b72617deae3e4160c474ddcc\": container with ID starting with f8a1b4c5e46bbb76c7a44184bc1904a0a2606302b72617deae3e4160c474ddcc not found: ID does not exist" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.590676 5072 scope.go:117] "RemoveContainer" containerID="3cc0994b54e5ba02fd84cb19da669f20a9e93e6c7d89899e145a9884b5a4b17c" Nov 24 11:27:17 crc kubenswrapper[5072]: E1124 11:27:17.591016 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cc0994b54e5ba02fd84cb19da669f20a9e93e6c7d89899e145a9884b5a4b17c\": container with ID starting with 3cc0994b54e5ba02fd84cb19da669f20a9e93e6c7d89899e145a9884b5a4b17c not found: ID does not exist" containerID="3cc0994b54e5ba02fd84cb19da669f20a9e93e6c7d89899e145a9884b5a4b17c" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.591056 5072 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cc0994b54e5ba02fd84cb19da669f20a9e93e6c7d89899e145a9884b5a4b17c"} err="failed to get container status \"3cc0994b54e5ba02fd84cb19da669f20a9e93e6c7d89899e145a9884b5a4b17c\": rpc error: code = NotFound desc = could not find container \"3cc0994b54e5ba02fd84cb19da669f20a9e93e6c7d89899e145a9884b5a4b17c\": container with ID starting with 3cc0994b54e5ba02fd84cb19da669f20a9e93e6c7d89899e145a9884b5a4b17c not found: ID does not exist" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.591086 5072 scope.go:117] "RemoveContainer" containerID="0a047b9e318fc25aba67c8da607192621e39331fd3d67280e241bfd35a1552aa" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.591435 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a047b9e318fc25aba67c8da607192621e39331fd3d67280e241bfd35a1552aa"} err="failed to get container status \"0a047b9e318fc25aba67c8da607192621e39331fd3d67280e241bfd35a1552aa\": rpc error: code = NotFound desc = could not find container \"0a047b9e318fc25aba67c8da607192621e39331fd3d67280e241bfd35a1552aa\": container with ID starting with 0a047b9e318fc25aba67c8da607192621e39331fd3d67280e241bfd35a1552aa not found: ID does not exist" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.591462 5072 scope.go:117] "RemoveContainer" containerID="a1bb8489e737dbd408bda813af16906c3d6f142f5853140b9e1ed5895142eb7c" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.591711 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1bb8489e737dbd408bda813af16906c3d6f142f5853140b9e1ed5895142eb7c"} err="failed to get container status \"a1bb8489e737dbd408bda813af16906c3d6f142f5853140b9e1ed5895142eb7c\": rpc error: code = NotFound desc = could not find container \"a1bb8489e737dbd408bda813af16906c3d6f142f5853140b9e1ed5895142eb7c\": container with ID starting with a1bb8489e737dbd408bda813af16906c3d6f142f5853140b9e1ed5895142eb7c not found: ID does not exist" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.591731 5072 scope.go:117] "RemoveContainer" containerID="f8a1b4c5e46bbb76c7a44184bc1904a0a2606302b72617deae3e4160c474ddcc" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.592065 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8a1b4c5e46bbb76c7a44184bc1904a0a2606302b72617deae3e4160c474ddcc"} err="failed to get container status \"f8a1b4c5e46bbb76c7a44184bc1904a0a2606302b72617deae3e4160c474ddcc\": rpc error: code = NotFound desc = could not find container \"f8a1b4c5e46bbb76c7a44184bc1904a0a2606302b72617deae3e4160c474ddcc\": container with ID starting with f8a1b4c5e46bbb76c7a44184bc1904a0a2606302b72617deae3e4160c474ddcc not found: ID does not exist" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.592139 5072 scope.go:117] "RemoveContainer" containerID="3cc0994b54e5ba02fd84cb19da669f20a9e93e6c7d89899e145a9884b5a4b17c" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.592443 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cc0994b54e5ba02fd84cb19da669f20a9e93e6c7d89899e145a9884b5a4b17c"} err="failed to get container status \"3cc0994b54e5ba02fd84cb19da669f20a9e93e6c7d89899e145a9884b5a4b17c\": rpc error: code = NotFound desc = could not find container \"3cc0994b54e5ba02fd84cb19da669f20a9e93e6c7d89899e145a9884b5a4b17c\": container with ID starting with 3cc0994b54e5ba02fd84cb19da669f20a9e93e6c7d89899e145a9884b5a4b17c 
not found: ID does not exist" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.592462 5072 scope.go:117] "RemoveContainer" containerID="0a047b9e318fc25aba67c8da607192621e39331fd3d67280e241bfd35a1552aa" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.592792 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a047b9e318fc25aba67c8da607192621e39331fd3d67280e241bfd35a1552aa"} err="failed to get container status \"0a047b9e318fc25aba67c8da607192621e39331fd3d67280e241bfd35a1552aa\": rpc error: code = NotFound desc = could not find container \"0a047b9e318fc25aba67c8da607192621e39331fd3d67280e241bfd35a1552aa\": container with ID starting with 0a047b9e318fc25aba67c8da607192621e39331fd3d67280e241bfd35a1552aa not found: ID does not exist" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.592861 5072 scope.go:117] "RemoveContainer" containerID="a1bb8489e737dbd408bda813af16906c3d6f142f5853140b9e1ed5895142eb7c" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.593110 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1bb8489e737dbd408bda813af16906c3d6f142f5853140b9e1ed5895142eb7c"} err="failed to get container status \"a1bb8489e737dbd408bda813af16906c3d6f142f5853140b9e1ed5895142eb7c\": rpc error: code = NotFound desc = could not find container \"a1bb8489e737dbd408bda813af16906c3d6f142f5853140b9e1ed5895142eb7c\": container with ID starting with a1bb8489e737dbd408bda813af16906c3d6f142f5853140b9e1ed5895142eb7c not found: ID does not exist" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.593129 5072 scope.go:117] "RemoveContainer" containerID="f8a1b4c5e46bbb76c7a44184bc1904a0a2606302b72617deae3e4160c474ddcc" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.593327 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8a1b4c5e46bbb76c7a44184bc1904a0a2606302b72617deae3e4160c474ddcc"} err="failed to get container status \"f8a1b4c5e46bbb76c7a44184bc1904a0a2606302b72617deae3e4160c474ddcc\": rpc error: code = NotFound desc = could not find container \"f8a1b4c5e46bbb76c7a44184bc1904a0a2606302b72617deae3e4160c474ddcc\": container with ID starting with f8a1b4c5e46bbb76c7a44184bc1904a0a2606302b72617deae3e4160c474ddcc not found: ID does not exist" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.593350 5072 scope.go:117] "RemoveContainer" containerID="3cc0994b54e5ba02fd84cb19da669f20a9e93e6c7d89899e145a9884b5a4b17c" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.593576 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cc0994b54e5ba02fd84cb19da669f20a9e93e6c7d89899e145a9884b5a4b17c"} err="failed to get container status \"3cc0994b54e5ba02fd84cb19da669f20a9e93e6c7d89899e145a9884b5a4b17c\": rpc error: code = NotFound desc = could not find container \"3cc0994b54e5ba02fd84cb19da669f20a9e93e6c7d89899e145a9884b5a4b17c\": container with ID starting with 3cc0994b54e5ba02fd84cb19da669f20a9e93e6c7d89899e145a9884b5a4b17c not found: ID does not exist" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.593594 5072 scope.go:117] "RemoveContainer" containerID="0a047b9e318fc25aba67c8da607192621e39331fd3d67280e241bfd35a1552aa" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.593790 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a047b9e318fc25aba67c8da607192621e39331fd3d67280e241bfd35a1552aa"} err="failed to get 
container status \"0a047b9e318fc25aba67c8da607192621e39331fd3d67280e241bfd35a1552aa\": rpc error: code = NotFound desc = could not find container \"0a047b9e318fc25aba67c8da607192621e39331fd3d67280e241bfd35a1552aa\": container with ID starting with 0a047b9e318fc25aba67c8da607192621e39331fd3d67280e241bfd35a1552aa not found: ID does not exist" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.593807 5072 scope.go:117] "RemoveContainer" containerID="a1bb8489e737dbd408bda813af16906c3d6f142f5853140b9e1ed5895142eb7c" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.594153 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1bb8489e737dbd408bda813af16906c3d6f142f5853140b9e1ed5895142eb7c"} err="failed to get container status \"a1bb8489e737dbd408bda813af16906c3d6f142f5853140b9e1ed5895142eb7c\": rpc error: code = NotFound desc = could not find container \"a1bb8489e737dbd408bda813af16906c3d6f142f5853140b9e1ed5895142eb7c\": container with ID starting with a1bb8489e737dbd408bda813af16906c3d6f142f5853140b9e1ed5895142eb7c not found: ID does not exist" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.594191 5072 scope.go:117] "RemoveContainer" containerID="f8a1b4c5e46bbb76c7a44184bc1904a0a2606302b72617deae3e4160c474ddcc" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.594489 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8a1b4c5e46bbb76c7a44184bc1904a0a2606302b72617deae3e4160c474ddcc"} err="failed to get container status \"f8a1b4c5e46bbb76c7a44184bc1904a0a2606302b72617deae3e4160c474ddcc\": rpc error: code = NotFound desc = could not find container \"f8a1b4c5e46bbb76c7a44184bc1904a0a2606302b72617deae3e4160c474ddcc\": container with ID starting with f8a1b4c5e46bbb76c7a44184bc1904a0a2606302b72617deae3e4160c474ddcc not found: ID does not exist" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.594510 5072 scope.go:117] "RemoveContainer" containerID="3cc0994b54e5ba02fd84cb19da669f20a9e93e6c7d89899e145a9884b5a4b17c" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.594714 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cc0994b54e5ba02fd84cb19da669f20a9e93e6c7d89899e145a9884b5a4b17c"} err="failed to get container status \"3cc0994b54e5ba02fd84cb19da669f20a9e93e6c7d89899e145a9884b5a4b17c\": rpc error: code = NotFound desc = could not find container \"3cc0994b54e5ba02fd84cb19da669f20a9e93e6c7d89899e145a9884b5a4b17c\": container with ID starting with 3cc0994b54e5ba02fd84cb19da669f20a9e93e6c7d89899e145a9884b5a4b17c not found: ID does not exist" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.793606 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.803388 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.811502 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:27:17 crc kubenswrapper[5072]: E1124 11:27:17.812049 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25e2c3b5-6179-4d4f-94ef-a8645a35a2ea" containerName="ceilometer-notification-agent" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.812134 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="25e2c3b5-6179-4d4f-94ef-a8645a35a2ea" containerName="ceilometer-notification-agent" Nov 24 11:27:17 crc 
kubenswrapper[5072]: E1124 11:27:17.812198 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25e2c3b5-6179-4d4f-94ef-a8645a35a2ea" containerName="proxy-httpd" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.812306 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="25e2c3b5-6179-4d4f-94ef-a8645a35a2ea" containerName="proxy-httpd" Nov 24 11:27:17 crc kubenswrapper[5072]: E1124 11:27:17.812392 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25e2c3b5-6179-4d4f-94ef-a8645a35a2ea" containerName="sg-core" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.812455 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="25e2c3b5-6179-4d4f-94ef-a8645a35a2ea" containerName="sg-core" Nov 24 11:27:17 crc kubenswrapper[5072]: E1124 11:27:17.812514 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25e2c3b5-6179-4d4f-94ef-a8645a35a2ea" containerName="ceilometer-central-agent" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.812561 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="25e2c3b5-6179-4d4f-94ef-a8645a35a2ea" containerName="ceilometer-central-agent" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.812769 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="25e2c3b5-6179-4d4f-94ef-a8645a35a2ea" containerName="ceilometer-central-agent" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.812835 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="25e2c3b5-6179-4d4f-94ef-a8645a35a2ea" containerName="sg-core" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.812893 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="25e2c3b5-6179-4d4f-94ef-a8645a35a2ea" containerName="ceilometer-notification-agent" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.812964 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="25e2c3b5-6179-4d4f-94ef-a8645a35a2ea" containerName="proxy-httpd" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.814472 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.818268 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.818516 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.827283 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.868233 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-config-data\") pod \"ceilometer-0\" (UID: \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\") " pod="openstack/ceilometer-0" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.868282 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-log-httpd\") pod \"ceilometer-0\" (UID: \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\") " pod="openstack/ceilometer-0" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.868324 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-run-httpd\") pod \"ceilometer-0\" (UID: \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\") " pod="openstack/ceilometer-0" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.868487 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\") " pod="openstack/ceilometer-0" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.868537 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\") " pod="openstack/ceilometer-0" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.868579 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-scripts\") pod \"ceilometer-0\" (UID: \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\") " pod="openstack/ceilometer-0" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.868679 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phw6n\" (UniqueName: \"kubernetes.io/projected/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-kube-api-access-phw6n\") pod \"ceilometer-0\" (UID: \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\") " pod="openstack/ceilometer-0" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.970854 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-run-httpd\") pod \"ceilometer-0\" (UID: \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\") " pod="openstack/ceilometer-0" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.970924 5072 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\") " pod="openstack/ceilometer-0" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.970955 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\") " pod="openstack/ceilometer-0" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.970993 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-scripts\") pod \"ceilometer-0\" (UID: \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\") " pod="openstack/ceilometer-0" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.971056 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phw6n\" (UniqueName: \"kubernetes.io/projected/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-kube-api-access-phw6n\") pod \"ceilometer-0\" (UID: \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\") " pod="openstack/ceilometer-0" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.971141 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-config-data\") pod \"ceilometer-0\" (UID: \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\") " pod="openstack/ceilometer-0" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.971185 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-log-httpd\") pod \"ceilometer-0\" (UID: \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\") " pod="openstack/ceilometer-0" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.971326 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-run-httpd\") pod \"ceilometer-0\" (UID: \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\") " pod="openstack/ceilometer-0" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.971725 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-log-httpd\") pod \"ceilometer-0\" (UID: \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\") " pod="openstack/ceilometer-0" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.977509 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-scripts\") pod \"ceilometer-0\" (UID: \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\") " pod="openstack/ceilometer-0" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.978720 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-config-data\") pod \"ceilometer-0\" (UID: \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\") " pod="openstack/ceilometer-0" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.989075 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\") " pod="openstack/ceilometer-0" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.991606 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phw6n\" (UniqueName: \"kubernetes.io/projected/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-kube-api-access-phw6n\") pod \"ceilometer-0\" (UID: \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\") " pod="openstack/ceilometer-0" Nov 24 11:27:17 crc kubenswrapper[5072]: I1124 11:27:17.992536 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\") " pod="openstack/ceilometer-0" Nov 24 11:27:18 crc kubenswrapper[5072]: I1124 11:27:18.130251 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:27:18 crc kubenswrapper[5072]: W1124 11:27:18.625222 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod743c36a5_f4ff_4c6b_8b2d_386827b23ec1.slice/crio-40368693b12b55779f6d2d447ed7003550a6090f1e48a270b2c38a8e5a444581 WatchSource:0}: Error finding container 40368693b12b55779f6d2d447ed7003550a6090f1e48a270b2c38a8e5a444581: Status 404 returned error can't find the container with id 40368693b12b55779f6d2d447ed7003550a6090f1e48a270b2c38a8e5a444581 Nov 24 11:27:18 crc kubenswrapper[5072]: I1124 11:27:18.633683 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:27:19 crc kubenswrapper[5072]: I1124 11:27:19.048041 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e2c3b5-6179-4d4f-94ef-a8645a35a2ea" path="/var/lib/kubelet/pods/25e2c3b5-6179-4d4f-94ef-a8645a35a2ea/volumes" Nov 24 11:27:19 crc kubenswrapper[5072]: I1124 11:27:19.475295 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"743c36a5-f4ff-4c6b-8b2d-386827b23ec1","Type":"ContainerStarted","Data":"3ed90f078e7a639da35ddd96ea70933999614837069375acc2126f016e4c410a"} Nov 24 11:27:19 crc kubenswrapper[5072]: I1124 11:27:19.475710 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"743c36a5-f4ff-4c6b-8b2d-386827b23ec1","Type":"ContainerStarted","Data":"40368693b12b55779f6d2d447ed7003550a6090f1e48a270b2c38a8e5a444581"} Nov 24 11:27:20 crc kubenswrapper[5072]: I1124 11:27:20.489734 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"743c36a5-f4ff-4c6b-8b2d-386827b23ec1","Type":"ContainerStarted","Data":"f97e4372d90e0a4327ee28a352f4c7287ff21b246a135f8b7cb9b22b70a7b9ca"} Nov 24 11:27:21 crc kubenswrapper[5072]: I1124 11:27:21.509003 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"743c36a5-f4ff-4c6b-8b2d-386827b23ec1","Type":"ContainerStarted","Data":"8f452ce9832c7bba8516c1eaaef237e830674368fcd24ffb815090cda369419e"} Nov 24 11:27:22 crc kubenswrapper[5072]: I1124 11:27:22.521893 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"743c36a5-f4ff-4c6b-8b2d-386827b23ec1","Type":"ContainerStarted","Data":"0b62e23958a2f2881c856aef432dfff7147e923376216bfda1bcc2f2c95a6bf9"} Nov 24 11:27:22 crc 
kubenswrapper[5072]: I1124 11:27:22.523678 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 11:27:22 crc kubenswrapper[5072]: I1124 11:27:22.550881 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.938311284 podStartE2EDuration="5.550862651s" podCreationTimestamp="2025-11-24 11:27:17 +0000 UTC" firstStartedPulling="2025-11-24 11:27:18.629449621 +0000 UTC m=+1090.340974117" lastFinishedPulling="2025-11-24 11:27:22.242000968 +0000 UTC m=+1093.953525484" observedRunningTime="2025-11-24 11:27:22.545986347 +0000 UTC m=+1094.257510863" watchObservedRunningTime="2025-11-24 11:27:22.550862651 +0000 UTC m=+1094.262387127" Nov 24 11:27:23 crc kubenswrapper[5072]: I1124 11:27:23.532516 5072 generic.go:334] "Generic (PLEG): container finished" podID="da16f5d0-f121-4388-983a-caca760fa5c6" containerID="2a0b31b06b87bbc624e6f5a2b7b21d3dcc46b487c372cb54d650bc6017fdd911" exitCode=0 Nov 24 11:27:23 crc kubenswrapper[5072]: I1124 11:27:23.532619 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vp5q8" event={"ID":"da16f5d0-f121-4388-983a-caca760fa5c6","Type":"ContainerDied","Data":"2a0b31b06b87bbc624e6f5a2b7b21d3dcc46b487c372cb54d650bc6017fdd911"} Nov 24 11:27:24 crc kubenswrapper[5072]: I1124 11:27:24.972050 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vp5q8" Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.008556 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da16f5d0-f121-4388-983a-caca760fa5c6-combined-ca-bundle\") pod \"da16f5d0-f121-4388-983a-caca760fa5c6\" (UID: \"da16f5d0-f121-4388-983a-caca760fa5c6\") " Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.008789 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da16f5d0-f121-4388-983a-caca760fa5c6-config-data\") pod \"da16f5d0-f121-4388-983a-caca760fa5c6\" (UID: \"da16f5d0-f121-4388-983a-caca760fa5c6\") " Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.008823 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da16f5d0-f121-4388-983a-caca760fa5c6-scripts\") pod \"da16f5d0-f121-4388-983a-caca760fa5c6\" (UID: \"da16f5d0-f121-4388-983a-caca760fa5c6\") " Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.009669 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt25z\" (UniqueName: \"kubernetes.io/projected/da16f5d0-f121-4388-983a-caca760fa5c6-kube-api-access-vt25z\") pod \"da16f5d0-f121-4388-983a-caca760fa5c6\" (UID: \"da16f5d0-f121-4388-983a-caca760fa5c6\") " Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.016057 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da16f5d0-f121-4388-983a-caca760fa5c6-kube-api-access-vt25z" (OuterVolumeSpecName: "kube-api-access-vt25z") pod "da16f5d0-f121-4388-983a-caca760fa5c6" (UID: "da16f5d0-f121-4388-983a-caca760fa5c6"). InnerVolumeSpecName "kube-api-access-vt25z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.016300 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da16f5d0-f121-4388-983a-caca760fa5c6-scripts" (OuterVolumeSpecName: "scripts") pod "da16f5d0-f121-4388-983a-caca760fa5c6" (UID: "da16f5d0-f121-4388-983a-caca760fa5c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.042612 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da16f5d0-f121-4388-983a-caca760fa5c6-config-data" (OuterVolumeSpecName: "config-data") pod "da16f5d0-f121-4388-983a-caca760fa5c6" (UID: "da16f5d0-f121-4388-983a-caca760fa5c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.059837 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da16f5d0-f121-4388-983a-caca760fa5c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da16f5d0-f121-4388-983a-caca760fa5c6" (UID: "da16f5d0-f121-4388-983a-caca760fa5c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.111518 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da16f5d0-f121-4388-983a-caca760fa5c6-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.111562 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da16f5d0-f121-4388-983a-caca760fa5c6-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.111583 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt25z\" (UniqueName: \"kubernetes.io/projected/da16f5d0-f121-4388-983a-caca760fa5c6-kube-api-access-vt25z\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.111606 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da16f5d0-f121-4388-983a-caca760fa5c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.560872 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vp5q8" event={"ID":"da16f5d0-f121-4388-983a-caca760fa5c6","Type":"ContainerDied","Data":"3c65e7a4ff4e2cdb58e30b32eea8a2276bfe9a42e68eafc322e8b2cbb568de5f"} Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.560929 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c65e7a4ff4e2cdb58e30b32eea8a2276bfe9a42e68eafc322e8b2cbb568de5f" Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.561013 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vp5q8" Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.761714 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 11:27:25 crc kubenswrapper[5072]: E1124 11:27:25.762318 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da16f5d0-f121-4388-983a-caca760fa5c6" containerName="nova-cell0-conductor-db-sync" Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.762415 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="da16f5d0-f121-4388-983a-caca760fa5c6" containerName="nova-cell0-conductor-db-sync" Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.762748 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="da16f5d0-f121-4388-983a-caca760fa5c6" containerName="nova-cell0-conductor-db-sync" Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.763495 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.766703 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-ltsnd" Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.766985 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.813716 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.824238 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnvgt\" (UniqueName: \"kubernetes.io/projected/cf68ac0f-299c-4ed5-a198-30bd0b2a7544-kube-api-access-vnvgt\") pod \"nova-cell0-conductor-0\" (UID: \"cf68ac0f-299c-4ed5-a198-30bd0b2a7544\") " pod="openstack/nova-cell0-conductor-0" Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.824338 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf68ac0f-299c-4ed5-a198-30bd0b2a7544-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cf68ac0f-299c-4ed5-a198-30bd0b2a7544\") " pod="openstack/nova-cell0-conductor-0" Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.824397 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf68ac0f-299c-4ed5-a198-30bd0b2a7544-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cf68ac0f-299c-4ed5-a198-30bd0b2a7544\") " pod="openstack/nova-cell0-conductor-0" Nov 24 11:27:25 crc kubenswrapper[5072]: E1124 11:27:25.840816 5072 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda16f5d0_f121_4388_983a_caca760fa5c6.slice\": RecentStats: unable to find data in memory cache]" Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.926267 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnvgt\" (UniqueName: \"kubernetes.io/projected/cf68ac0f-299c-4ed5-a198-30bd0b2a7544-kube-api-access-vnvgt\") pod \"nova-cell0-conductor-0\" (UID: \"cf68ac0f-299c-4ed5-a198-30bd0b2a7544\") " pod="openstack/nova-cell0-conductor-0" Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.926364 5072 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf68ac0f-299c-4ed5-a198-30bd0b2a7544-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cf68ac0f-299c-4ed5-a198-30bd0b2a7544\") " pod="openstack/nova-cell0-conductor-0" Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.926431 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf68ac0f-299c-4ed5-a198-30bd0b2a7544-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cf68ac0f-299c-4ed5-a198-30bd0b2a7544\") " pod="openstack/nova-cell0-conductor-0" Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.931038 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf68ac0f-299c-4ed5-a198-30bd0b2a7544-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cf68ac0f-299c-4ed5-a198-30bd0b2a7544\") " pod="openstack/nova-cell0-conductor-0" Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.932573 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf68ac0f-299c-4ed5-a198-30bd0b2a7544-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cf68ac0f-299c-4ed5-a198-30bd0b2a7544\") " pod="openstack/nova-cell0-conductor-0" Nov 24 11:27:25 crc kubenswrapper[5072]: I1124 11:27:25.946983 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnvgt\" (UniqueName: \"kubernetes.io/projected/cf68ac0f-299c-4ed5-a198-30bd0b2a7544-kube-api-access-vnvgt\") pod \"nova-cell0-conductor-0\" (UID: \"cf68ac0f-299c-4ed5-a198-30bd0b2a7544\") " pod="openstack/nova-cell0-conductor-0" Nov 24 11:27:26 crc kubenswrapper[5072]: I1124 11:27:26.100499 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 11:27:26 crc kubenswrapper[5072]: I1124 11:27:26.391691 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 11:27:26 crc kubenswrapper[5072]: W1124 11:27:26.400927 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf68ac0f_299c_4ed5_a198_30bd0b2a7544.slice/crio-086aa53c728f8414de74ef0b84d2d5ab515b501e41c27b63da1bafd66d7b5cf2 WatchSource:0}: Error finding container 086aa53c728f8414de74ef0b84d2d5ab515b501e41c27b63da1bafd66d7b5cf2: Status 404 returned error can't find the container with id 086aa53c728f8414de74ef0b84d2d5ab515b501e41c27b63da1bafd66d7b5cf2 Nov 24 11:27:26 crc kubenswrapper[5072]: I1124 11:27:26.570929 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"cf68ac0f-299c-4ed5-a198-30bd0b2a7544","Type":"ContainerStarted","Data":"086aa53c728f8414de74ef0b84d2d5ab515b501e41c27b63da1bafd66d7b5cf2"} Nov 24 11:27:27 crc kubenswrapper[5072]: I1124 11:27:27.585455 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"cf68ac0f-299c-4ed5-a198-30bd0b2a7544","Type":"ContainerStarted","Data":"5d22a5307e9d38d87987bf1a42fb113aa10964cf91b65554bd6b8c955e7a5850"} Nov 24 11:27:27 crc kubenswrapper[5072]: I1124 11:27:27.585826 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 24 11:27:27 crc kubenswrapper[5072]: I1124 11:27:27.616620 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.616596814 podStartE2EDuration="2.616596814s" podCreationTimestamp="2025-11-24 11:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:27:27.614292375 +0000 UTC m=+1099.325816861" watchObservedRunningTime="2025-11-24 11:27:27.616596814 +0000 UTC m=+1099.328121300" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.130057 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.636199 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-cd5bg"] Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.637653 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cd5bg" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.643101 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.645978 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.653958 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-cd5bg"] Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.680135 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr2mf\" (UniqueName: \"kubernetes.io/projected/08555f6e-e089-44c2-9193-b40a03e6f2f5-kube-api-access-pr2mf\") pod \"nova-cell0-cell-mapping-cd5bg\" (UID: \"08555f6e-e089-44c2-9193-b40a03e6f2f5\") " pod="openstack/nova-cell0-cell-mapping-cd5bg" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.680216 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08555f6e-e089-44c2-9193-b40a03e6f2f5-config-data\") pod \"nova-cell0-cell-mapping-cd5bg\" (UID: \"08555f6e-e089-44c2-9193-b40a03e6f2f5\") " pod="openstack/nova-cell0-cell-mapping-cd5bg" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.680240 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08555f6e-e089-44c2-9193-b40a03e6f2f5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-cd5bg\" (UID: \"08555f6e-e089-44c2-9193-b40a03e6f2f5\") " pod="openstack/nova-cell0-cell-mapping-cd5bg" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.680323 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08555f6e-e089-44c2-9193-b40a03e6f2f5-scripts\") pod \"nova-cell0-cell-mapping-cd5bg\" (UID: \"08555f6e-e089-44c2-9193-b40a03e6f2f5\") " pod="openstack/nova-cell0-cell-mapping-cd5bg" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.763279 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.765299 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.768642 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.774588 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.781491 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr2mf\" (UniqueName: \"kubernetes.io/projected/08555f6e-e089-44c2-9193-b40a03e6f2f5-kube-api-access-pr2mf\") pod \"nova-cell0-cell-mapping-cd5bg\" (UID: \"08555f6e-e089-44c2-9193-b40a03e6f2f5\") " pod="openstack/nova-cell0-cell-mapping-cd5bg" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.781608 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08555f6e-e089-44c2-9193-b40a03e6f2f5-config-data\") pod \"nova-cell0-cell-mapping-cd5bg\" (UID: \"08555f6e-e089-44c2-9193-b40a03e6f2f5\") " pod="openstack/nova-cell0-cell-mapping-cd5bg" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.781641 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08555f6e-e089-44c2-9193-b40a03e6f2f5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-cd5bg\" (UID: \"08555f6e-e089-44c2-9193-b40a03e6f2f5\") " pod="openstack/nova-cell0-cell-mapping-cd5bg" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.781731 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08555f6e-e089-44c2-9193-b40a03e6f2f5-scripts\") pod \"nova-cell0-cell-mapping-cd5bg\" (UID: \"08555f6e-e089-44c2-9193-b40a03e6f2f5\") " pod="openstack/nova-cell0-cell-mapping-cd5bg" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.787530 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08555f6e-e089-44c2-9193-b40a03e6f2f5-scripts\") pod \"nova-cell0-cell-mapping-cd5bg\" (UID: \"08555f6e-e089-44c2-9193-b40a03e6f2f5\") " pod="openstack/nova-cell0-cell-mapping-cd5bg" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.796144 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08555f6e-e089-44c2-9193-b40a03e6f2f5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-cd5bg\" (UID: \"08555f6e-e089-44c2-9193-b40a03e6f2f5\") " pod="openstack/nova-cell0-cell-mapping-cd5bg" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.814695 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08555f6e-e089-44c2-9193-b40a03e6f2f5-config-data\") pod \"nova-cell0-cell-mapping-cd5bg\" (UID: \"08555f6e-e089-44c2-9193-b40a03e6f2f5\") " pod="openstack/nova-cell0-cell-mapping-cd5bg" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.848113 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr2mf\" (UniqueName: \"kubernetes.io/projected/08555f6e-e089-44c2-9193-b40a03e6f2f5-kube-api-access-pr2mf\") pod \"nova-cell0-cell-mapping-cd5bg\" (UID: \"08555f6e-e089-44c2-9193-b40a03e6f2f5\") " pod="openstack/nova-cell0-cell-mapping-cd5bg" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.871460 5072 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.872546 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.878707 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.894364 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79kld\" (UniqueName: \"kubernetes.io/projected/98f36c5e-b827-4fcb-ac98-8eb62f230787-kube-api-access-79kld\") pod \"nova-scheduler-0\" (UID: \"98f36c5e-b827-4fcb-ac98-8eb62f230787\") " pod="openstack/nova-scheduler-0" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.894447 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/384915e0-f433-462f-82ab-d31ebaeb63d1-config-data\") pod \"nova-api-0\" (UID: \"384915e0-f433-462f-82ab-d31ebaeb63d1\") " pod="openstack/nova-api-0" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.894468 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/384915e0-f433-462f-82ab-d31ebaeb63d1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"384915e0-f433-462f-82ab-d31ebaeb63d1\") " pod="openstack/nova-api-0" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.894493 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/384915e0-f433-462f-82ab-d31ebaeb63d1-logs\") pod \"nova-api-0\" (UID: \"384915e0-f433-462f-82ab-d31ebaeb63d1\") " pod="openstack/nova-api-0" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.894521 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f36c5e-b827-4fcb-ac98-8eb62f230787-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"98f36c5e-b827-4fcb-ac98-8eb62f230787\") " pod="openstack/nova-scheduler-0" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.894544 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f36c5e-b827-4fcb-ac98-8eb62f230787-config-data\") pod \"nova-scheduler-0\" (UID: \"98f36c5e-b827-4fcb-ac98-8eb62f230787\") " pod="openstack/nova-scheduler-0" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.894563 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rc9b\" (UniqueName: \"kubernetes.io/projected/384915e0-f433-462f-82ab-d31ebaeb63d1-kube-api-access-4rc9b\") pod \"nova-api-0\" (UID: \"384915e0-f433-462f-82ab-d31ebaeb63d1\") " pod="openstack/nova-api-0" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.909161 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.910637 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.914751 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.921202 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.961985 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.975167 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.976491 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.979838 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.980004 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cd5bg" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.986188 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.995520 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbg9q\" (UniqueName: \"kubernetes.io/projected/dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9-kube-api-access-tbg9q\") pod \"nova-cell1-novncproxy-0\" (UID: \"dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.995581 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79kld\" (UniqueName: \"kubernetes.io/projected/98f36c5e-b827-4fcb-ac98-8eb62f230787-kube-api-access-79kld\") pod \"nova-scheduler-0\" (UID: \"98f36c5e-b827-4fcb-ac98-8eb62f230787\") " pod="openstack/nova-scheduler-0" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.995630 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/384915e0-f433-462f-82ab-d31ebaeb63d1-config-data\") pod \"nova-api-0\" (UID: \"384915e0-f433-462f-82ab-d31ebaeb63d1\") " pod="openstack/nova-api-0" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.995645 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/384915e0-f433-462f-82ab-d31ebaeb63d1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"384915e0-f433-462f-82ab-d31ebaeb63d1\") " pod="openstack/nova-api-0" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.995680 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/384915e0-f433-462f-82ab-d31ebaeb63d1-logs\") pod \"nova-api-0\" (UID: \"384915e0-f433-462f-82ab-d31ebaeb63d1\") " pod="openstack/nova-api-0" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.995698 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6360280b-2986-4593-86e3-e1ea63a0c6de-logs\") pod \"nova-metadata-0\" (UID: \"6360280b-2986-4593-86e3-e1ea63a0c6de\") " 
pod="openstack/nova-metadata-0" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.995715 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6360280b-2986-4593-86e3-e1ea63a0c6de-config-data\") pod \"nova-metadata-0\" (UID: \"6360280b-2986-4593-86e3-e1ea63a0c6de\") " pod="openstack/nova-metadata-0" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.995740 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f36c5e-b827-4fcb-ac98-8eb62f230787-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"98f36c5e-b827-4fcb-ac98-8eb62f230787\") " pod="openstack/nova-scheduler-0" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.995758 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.995775 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.995790 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glmb7\" (UniqueName: \"kubernetes.io/projected/6360280b-2986-4593-86e3-e1ea63a0c6de-kube-api-access-glmb7\") pod \"nova-metadata-0\" (UID: \"6360280b-2986-4593-86e3-e1ea63a0c6de\") " pod="openstack/nova-metadata-0" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.995806 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f36c5e-b827-4fcb-ac98-8eb62f230787-config-data\") pod \"nova-scheduler-0\" (UID: \"98f36c5e-b827-4fcb-ac98-8eb62f230787\") " pod="openstack/nova-scheduler-0" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.995829 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rc9b\" (UniqueName: \"kubernetes.io/projected/384915e0-f433-462f-82ab-d31ebaeb63d1-kube-api-access-4rc9b\") pod \"nova-api-0\" (UID: \"384915e0-f433-462f-82ab-d31ebaeb63d1\") " pod="openstack/nova-api-0" Nov 24 11:27:31 crc kubenswrapper[5072]: I1124 11:27:31.995848 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6360280b-2986-4593-86e3-e1ea63a0c6de-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6360280b-2986-4593-86e3-e1ea63a0c6de\") " pod="openstack/nova-metadata-0" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:31.999957 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f36c5e-b827-4fcb-ac98-8eb62f230787-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"98f36c5e-b827-4fcb-ac98-8eb62f230787\") " pod="openstack/nova-scheduler-0" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.000099 5072 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/384915e0-f433-462f-82ab-d31ebaeb63d1-logs\") pod \"nova-api-0\" (UID: \"384915e0-f433-462f-82ab-d31ebaeb63d1\") " pod="openstack/nova-api-0" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.004339 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/384915e0-f433-462f-82ab-d31ebaeb63d1-config-data\") pod \"nova-api-0\" (UID: \"384915e0-f433-462f-82ab-d31ebaeb63d1\") " pod="openstack/nova-api-0" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.013577 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/384915e0-f433-462f-82ab-d31ebaeb63d1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"384915e0-f433-462f-82ab-d31ebaeb63d1\") " pod="openstack/nova-api-0" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.019669 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f36c5e-b827-4fcb-ac98-8eb62f230787-config-data\") pod \"nova-scheduler-0\" (UID: \"98f36c5e-b827-4fcb-ac98-8eb62f230787\") " pod="openstack/nova-scheduler-0" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.024249 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rc9b\" (UniqueName: \"kubernetes.io/projected/384915e0-f433-462f-82ab-d31ebaeb63d1-kube-api-access-4rc9b\") pod \"nova-api-0\" (UID: \"384915e0-f433-462f-82ab-d31ebaeb63d1\") " pod="openstack/nova-api-0" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.042352 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-5pgtx"] Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.043957 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-5pgtx" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.066959 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79kld\" (UniqueName: \"kubernetes.io/projected/98f36c5e-b827-4fcb-ac98-8eb62f230787-kube-api-access-79kld\") pod \"nova-scheduler-0\" (UID: \"98f36c5e-b827-4fcb-ac98-8eb62f230787\") " pod="openstack/nova-scheduler-0" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.074066 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-5pgtx"] Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.080617 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.098716 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6360280b-2986-4593-86e3-e1ea63a0c6de-logs\") pod \"nova-metadata-0\" (UID: \"6360280b-2986-4593-86e3-e1ea63a0c6de\") " pod="openstack/nova-metadata-0" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.098937 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6360280b-2986-4593-86e3-e1ea63a0c6de-config-data\") pod \"nova-metadata-0\" (UID: \"6360280b-2986-4593-86e3-e1ea63a0c6de\") " pod="openstack/nova-metadata-0" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.098966 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.098985 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.099002 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glmb7\" (UniqueName: \"kubernetes.io/projected/6360280b-2986-4593-86e3-e1ea63a0c6de-kube-api-access-glmb7\") pod \"nova-metadata-0\" (UID: \"6360280b-2986-4593-86e3-e1ea63a0c6de\") " pod="openstack/nova-metadata-0" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.099035 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6360280b-2986-4593-86e3-e1ea63a0c6de-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6360280b-2986-4593-86e3-e1ea63a0c6de\") " pod="openstack/nova-metadata-0" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.099087 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3524341f-32c2-40b8-bfe3-f551f8e48de0-config\") pod \"dnsmasq-dns-566b5b7845-5pgtx\" (UID: \"3524341f-32c2-40b8-bfe3-f551f8e48de0\") " pod="openstack/dnsmasq-dns-566b5b7845-5pgtx" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.099106 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3524341f-32c2-40b8-bfe3-f551f8e48de0-dns-svc\") pod \"dnsmasq-dns-566b5b7845-5pgtx\" (UID: \"3524341f-32c2-40b8-bfe3-f551f8e48de0\") " pod="openstack/dnsmasq-dns-566b5b7845-5pgtx" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.099125 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbtg7\" (UniqueName: \"kubernetes.io/projected/3524341f-32c2-40b8-bfe3-f551f8e48de0-kube-api-access-mbtg7\") pod \"dnsmasq-dns-566b5b7845-5pgtx\" (UID: \"3524341f-32c2-40b8-bfe3-f551f8e48de0\") " pod="openstack/dnsmasq-dns-566b5b7845-5pgtx" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.099145 5072 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3524341f-32c2-40b8-bfe3-f551f8e48de0-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-5pgtx\" (UID: \"3524341f-32c2-40b8-bfe3-f551f8e48de0\") " pod="openstack/dnsmasq-dns-566b5b7845-5pgtx" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.099173 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbg9q\" (UniqueName: \"kubernetes.io/projected/dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9-kube-api-access-tbg9q\") pod \"nova-cell1-novncproxy-0\" (UID: \"dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.099246 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3524341f-32c2-40b8-bfe3-f551f8e48de0-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-5pgtx\" (UID: \"3524341f-32c2-40b8-bfe3-f551f8e48de0\") " pod="openstack/dnsmasq-dns-566b5b7845-5pgtx" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.102492 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6360280b-2986-4593-86e3-e1ea63a0c6de-logs\") pod \"nova-metadata-0\" (UID: \"6360280b-2986-4593-86e3-e1ea63a0c6de\") " pod="openstack/nova-metadata-0" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.103298 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.107768 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.110047 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6360280b-2986-4593-86e3-e1ea63a0c6de-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6360280b-2986-4593-86e3-e1ea63a0c6de\") " pod="openstack/nova-metadata-0" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.110726 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6360280b-2986-4593-86e3-e1ea63a0c6de-config-data\") pod \"nova-metadata-0\" (UID: \"6360280b-2986-4593-86e3-e1ea63a0c6de\") " pod="openstack/nova-metadata-0" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.130613 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbg9q\" (UniqueName: \"kubernetes.io/projected/dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9-kube-api-access-tbg9q\") pod \"nova-cell1-novncproxy-0\" (UID: \"dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.131025 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glmb7\" (UniqueName: \"kubernetes.io/projected/6360280b-2986-4593-86e3-e1ea63a0c6de-kube-api-access-glmb7\") pod 
\"nova-metadata-0\" (UID: \"6360280b-2986-4593-86e3-e1ea63a0c6de\") " pod="openstack/nova-metadata-0" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.200628 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3524341f-32c2-40b8-bfe3-f551f8e48de0-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-5pgtx\" (UID: \"3524341f-32c2-40b8-bfe3-f551f8e48de0\") " pod="openstack/dnsmasq-dns-566b5b7845-5pgtx" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.200759 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3524341f-32c2-40b8-bfe3-f551f8e48de0-config\") pod \"dnsmasq-dns-566b5b7845-5pgtx\" (UID: \"3524341f-32c2-40b8-bfe3-f551f8e48de0\") " pod="openstack/dnsmasq-dns-566b5b7845-5pgtx" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.200783 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3524341f-32c2-40b8-bfe3-f551f8e48de0-dns-svc\") pod \"dnsmasq-dns-566b5b7845-5pgtx\" (UID: \"3524341f-32c2-40b8-bfe3-f551f8e48de0\") " pod="openstack/dnsmasq-dns-566b5b7845-5pgtx" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.201609 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3524341f-32c2-40b8-bfe3-f551f8e48de0-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-5pgtx\" (UID: \"3524341f-32c2-40b8-bfe3-f551f8e48de0\") " pod="openstack/dnsmasq-dns-566b5b7845-5pgtx" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.201828 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3524341f-32c2-40b8-bfe3-f551f8e48de0-config\") pod \"dnsmasq-dns-566b5b7845-5pgtx\" (UID: \"3524341f-32c2-40b8-bfe3-f551f8e48de0\") " pod="openstack/dnsmasq-dns-566b5b7845-5pgtx" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.201891 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbtg7\" (UniqueName: \"kubernetes.io/projected/3524341f-32c2-40b8-bfe3-f551f8e48de0-kube-api-access-mbtg7\") pod \"dnsmasq-dns-566b5b7845-5pgtx\" (UID: \"3524341f-32c2-40b8-bfe3-f551f8e48de0\") " pod="openstack/dnsmasq-dns-566b5b7845-5pgtx" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.201943 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3524341f-32c2-40b8-bfe3-f551f8e48de0-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-5pgtx\" (UID: \"3524341f-32c2-40b8-bfe3-f551f8e48de0\") " pod="openstack/dnsmasq-dns-566b5b7845-5pgtx" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.202533 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3524341f-32c2-40b8-bfe3-f551f8e48de0-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-5pgtx\" (UID: \"3524341f-32c2-40b8-bfe3-f551f8e48de0\") " pod="openstack/dnsmasq-dns-566b5b7845-5pgtx" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.203057 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3524341f-32c2-40b8-bfe3-f551f8e48de0-dns-svc\") pod \"dnsmasq-dns-566b5b7845-5pgtx\" (UID: \"3524341f-32c2-40b8-bfe3-f551f8e48de0\") " pod="openstack/dnsmasq-dns-566b5b7845-5pgtx" Nov 24 11:27:32 crc 
kubenswrapper[5072]: I1124 11:27:32.221366 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.222753 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbtg7\" (UniqueName: \"kubernetes.io/projected/3524341f-32c2-40b8-bfe3-f551f8e48de0-kube-api-access-mbtg7\") pod \"dnsmasq-dns-566b5b7845-5pgtx\" (UID: \"3524341f-32c2-40b8-bfe3-f551f8e48de0\") " pod="openstack/dnsmasq-dns-566b5b7845-5pgtx" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.235772 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.378990 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.403293 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-5pgtx" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.578552 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-cd5bg"] Nov 24 11:27:32 crc kubenswrapper[5072]: W1124 11:27:32.586065 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08555f6e_e089_44c2_9193_b40a03e6f2f5.slice/crio-ea4bd93b24e287ff9a5a21c0847a70a93a791aa26280bd152c4ba8d1700500ac WatchSource:0}: Error finding container ea4bd93b24e287ff9a5a21c0847a70a93a791aa26280bd152c4ba8d1700500ac: Status 404 returned error can't find the container with id ea4bd93b24e287ff9a5a21c0847a70a93a791aa26280bd152c4ba8d1700500ac Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.628071 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-28tkc"] Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.629137 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-28tkc" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.635808 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.636436 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.643643 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-28tkc"] Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.647366 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cd5bg" event={"ID":"08555f6e-e089-44c2-9193-b40a03e6f2f5","Type":"ContainerStarted","Data":"ea4bd93b24e287ff9a5a21c0847a70a93a791aa26280bd152c4ba8d1700500ac"} Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.679241 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.738627 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.810585 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfb7c\" (UniqueName: \"kubernetes.io/projected/f1dfc861-93be-4798-b474-eab29b57c56b-kube-api-access-xfb7c\") pod \"nova-cell1-conductor-db-sync-28tkc\" (UID: \"f1dfc861-93be-4798-b474-eab29b57c56b\") " pod="openstack/nova-cell1-conductor-db-sync-28tkc" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.810634 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1dfc861-93be-4798-b474-eab29b57c56b-config-data\") pod \"nova-cell1-conductor-db-sync-28tkc\" (UID: \"f1dfc861-93be-4798-b474-eab29b57c56b\") " pod="openstack/nova-cell1-conductor-db-sync-28tkc" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.810701 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1dfc861-93be-4798-b474-eab29b57c56b-scripts\") pod \"nova-cell1-conductor-db-sync-28tkc\" (UID: \"f1dfc861-93be-4798-b474-eab29b57c56b\") " pod="openstack/nova-cell1-conductor-db-sync-28tkc" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.811149 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1dfc861-93be-4798-b474-eab29b57c56b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-28tkc\" (UID: \"f1dfc861-93be-4798-b474-eab29b57c56b\") " pod="openstack/nova-cell1-conductor-db-sync-28tkc" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.852361 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:27:32 crc kubenswrapper[5072]: W1124 11:27:32.860326 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6360280b_2986_4593_86e3_e1ea63a0c6de.slice/crio-cc4d9a0b77750b7f166b69ef181ca88b0fb20e271477fc8661eaacb4dc2ed016 WatchSource:0}: Error finding container cc4d9a0b77750b7f166b69ef181ca88b0fb20e271477fc8661eaacb4dc2ed016: Status 404 returned error can't find the container with id 
cc4d9a0b77750b7f166b69ef181ca88b0fb20e271477fc8661eaacb4dc2ed016 Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.906076 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 11:27:32 crc kubenswrapper[5072]: W1124 11:27:32.911965 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfc34bce_a7cd_450b_8b0d_ed4d3172c2d9.slice/crio-8cdf624e856dd11d7aad1cb86a5a4eea2fabfe91215dab094f37d82aeecdd4ed WatchSource:0}: Error finding container 8cdf624e856dd11d7aad1cb86a5a4eea2fabfe91215dab094f37d82aeecdd4ed: Status 404 returned error can't find the container with id 8cdf624e856dd11d7aad1cb86a5a4eea2fabfe91215dab094f37d82aeecdd4ed Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.912269 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfb7c\" (UniqueName: \"kubernetes.io/projected/f1dfc861-93be-4798-b474-eab29b57c56b-kube-api-access-xfb7c\") pod \"nova-cell1-conductor-db-sync-28tkc\" (UID: \"f1dfc861-93be-4798-b474-eab29b57c56b\") " pod="openstack/nova-cell1-conductor-db-sync-28tkc" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.912303 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1dfc861-93be-4798-b474-eab29b57c56b-config-data\") pod \"nova-cell1-conductor-db-sync-28tkc\" (UID: \"f1dfc861-93be-4798-b474-eab29b57c56b\") " pod="openstack/nova-cell1-conductor-db-sync-28tkc" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.912349 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1dfc861-93be-4798-b474-eab29b57c56b-scripts\") pod \"nova-cell1-conductor-db-sync-28tkc\" (UID: \"f1dfc861-93be-4798-b474-eab29b57c56b\") " pod="openstack/nova-cell1-conductor-db-sync-28tkc" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.912444 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1dfc861-93be-4798-b474-eab29b57c56b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-28tkc\" (UID: \"f1dfc861-93be-4798-b474-eab29b57c56b\") " pod="openstack/nova-cell1-conductor-db-sync-28tkc" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.918163 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1dfc861-93be-4798-b474-eab29b57c56b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-28tkc\" (UID: \"f1dfc861-93be-4798-b474-eab29b57c56b\") " pod="openstack/nova-cell1-conductor-db-sync-28tkc" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.926741 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1dfc861-93be-4798-b474-eab29b57c56b-scripts\") pod \"nova-cell1-conductor-db-sync-28tkc\" (UID: \"f1dfc861-93be-4798-b474-eab29b57c56b\") " pod="openstack/nova-cell1-conductor-db-sync-28tkc" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.927469 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1dfc861-93be-4798-b474-eab29b57c56b-config-data\") pod \"nova-cell1-conductor-db-sync-28tkc\" (UID: \"f1dfc861-93be-4798-b474-eab29b57c56b\") " pod="openstack/nova-cell1-conductor-db-sync-28tkc" Nov 24 11:27:32 crc 
kubenswrapper[5072]: I1124 11:27:32.931448 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfb7c\" (UniqueName: \"kubernetes.io/projected/f1dfc861-93be-4798-b474-eab29b57c56b-kube-api-access-xfb7c\") pod \"nova-cell1-conductor-db-sync-28tkc\" (UID: \"f1dfc861-93be-4798-b474-eab29b57c56b\") " pod="openstack/nova-cell1-conductor-db-sync-28tkc" Nov 24 11:27:32 crc kubenswrapper[5072]: I1124 11:27:32.955327 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-28tkc" Nov 24 11:27:33 crc kubenswrapper[5072]: I1124 11:27:33.035172 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-5pgtx"] Nov 24 11:27:33 crc kubenswrapper[5072]: I1124 11:27:33.411678 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-28tkc"] Nov 24 11:27:33 crc kubenswrapper[5072]: W1124 11:27:33.413868 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1dfc861_93be_4798_b474_eab29b57c56b.slice/crio-824fd7f1f62587b9a21961ac099b8fc393640a070fe023643b44d032268822a5 WatchSource:0}: Error finding container 824fd7f1f62587b9a21961ac099b8fc393640a070fe023643b44d032268822a5: Status 404 returned error can't find the container with id 824fd7f1f62587b9a21961ac099b8fc393640a070fe023643b44d032268822a5 Nov 24 11:27:33 crc kubenswrapper[5072]: I1124 11:27:33.660555 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"384915e0-f433-462f-82ab-d31ebaeb63d1","Type":"ContainerStarted","Data":"ed1ebc5c0ee2465b641c7187acf4ef019d93470fa39026deda64a55f83d2e2ba"} Nov 24 11:27:33 crc kubenswrapper[5072]: I1124 11:27:33.663009 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-28tkc" event={"ID":"f1dfc861-93be-4798-b474-eab29b57c56b","Type":"ContainerStarted","Data":"c075a0b6c571df3a9da3865213dc0fdfafca0e85fcc958bd975825b331cd7639"} Nov 24 11:27:33 crc kubenswrapper[5072]: I1124 11:27:33.663052 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-28tkc" event={"ID":"f1dfc861-93be-4798-b474-eab29b57c56b","Type":"ContainerStarted","Data":"824fd7f1f62587b9a21961ac099b8fc393640a070fe023643b44d032268822a5"} Nov 24 11:27:33 crc kubenswrapper[5072]: I1124 11:27:33.665142 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"98f36c5e-b827-4fcb-ac98-8eb62f230787","Type":"ContainerStarted","Data":"c24bb5f70364f561494ac7860e99ae41b3dd7b78e208b13cb816c74a699d5360"} Nov 24 11:27:33 crc kubenswrapper[5072]: I1124 11:27:33.668163 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cd5bg" event={"ID":"08555f6e-e089-44c2-9193-b40a03e6f2f5","Type":"ContainerStarted","Data":"dd9b1d0df5faeef81f5840dd58ed4436962ca833cf0b88f5779837a365ae20aa"} Nov 24 11:27:33 crc kubenswrapper[5072]: I1124 11:27:33.684493 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-28tkc" podStartSLOduration=1.6844716100000001 podStartE2EDuration="1.68447161s" podCreationTimestamp="2025-11-24 11:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:27:33.677879796 +0000 UTC m=+1105.389404272" watchObservedRunningTime="2025-11-24 
11:27:33.68447161 +0000 UTC m=+1105.395996086" Nov 24 11:27:33 crc kubenswrapper[5072]: I1124 11:27:33.685531 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6360280b-2986-4593-86e3-e1ea63a0c6de","Type":"ContainerStarted","Data":"cc4d9a0b77750b7f166b69ef181ca88b0fb20e271477fc8661eaacb4dc2ed016"} Nov 24 11:27:33 crc kubenswrapper[5072]: I1124 11:27:33.693222 5072 generic.go:334] "Generic (PLEG): container finished" podID="3524341f-32c2-40b8-bfe3-f551f8e48de0" containerID="36aadd7da48dcfe3611e54aed6f2269821ff6eaf7dff59ccd1c6c694d1f79054" exitCode=0 Nov 24 11:27:33 crc kubenswrapper[5072]: I1124 11:27:33.693294 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-5pgtx" event={"ID":"3524341f-32c2-40b8-bfe3-f551f8e48de0","Type":"ContainerDied","Data":"36aadd7da48dcfe3611e54aed6f2269821ff6eaf7dff59ccd1c6c694d1f79054"} Nov 24 11:27:33 crc kubenswrapper[5072]: I1124 11:27:33.693320 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-5pgtx" event={"ID":"3524341f-32c2-40b8-bfe3-f551f8e48de0","Type":"ContainerStarted","Data":"2056516b7bd7c64638826d8dc8be673c35deb99c28c2dd31e1600cc00ff71bc3"} Nov 24 11:27:33 crc kubenswrapper[5072]: I1124 11:27:33.696660 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-cd5bg" podStartSLOduration=2.696650173 podStartE2EDuration="2.696650173s" podCreationTimestamp="2025-11-24 11:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:27:33.690660914 +0000 UTC m=+1105.402185390" watchObservedRunningTime="2025-11-24 11:27:33.696650173 +0000 UTC m=+1105.408174649" Nov 24 11:27:33 crc kubenswrapper[5072]: I1124 11:27:33.702422 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9","Type":"ContainerStarted","Data":"8cdf624e856dd11d7aad1cb86a5a4eea2fabfe91215dab094f37d82aeecdd4ed"} Nov 24 11:27:35 crc kubenswrapper[5072]: I1124 11:27:35.932901 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:27:35 crc kubenswrapper[5072]: I1124 11:27:35.941393 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 11:27:36 crc kubenswrapper[5072]: I1124 11:27:36.751301 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6360280b-2986-4593-86e3-e1ea63a0c6de","Type":"ContainerStarted","Data":"dd7f42b1411724649601df020fc2c9ae6f9cc00a6ef4ef01579c9049a308b20f"} Nov 24 11:27:36 crc kubenswrapper[5072]: I1124 11:27:36.761521 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-5pgtx" event={"ID":"3524341f-32c2-40b8-bfe3-f551f8e48de0","Type":"ContainerStarted","Data":"61e4480db97e7be4cbb9f676fa18803cc688d2588e03b938aeb98351268cc76f"} Nov 24 11:27:36 crc kubenswrapper[5072]: I1124 11:27:36.761883 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-566b5b7845-5pgtx" Nov 24 11:27:36 crc kubenswrapper[5072]: I1124 11:27:36.777812 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9","Type":"ContainerStarted","Data":"d333800dfc55359b3b38b4e531c7eb0c21351aa1dbd410d7878194807ee7c163"} Nov 24 11:27:36 crc 
kubenswrapper[5072]: I1124 11:27:36.778067 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://d333800dfc55359b3b38b4e531c7eb0c21351aa1dbd410d7878194807ee7c163" gracePeriod=30 Nov 24 11:27:36 crc kubenswrapper[5072]: I1124 11:27:36.800300 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-566b5b7845-5pgtx" podStartSLOduration=5.800277359 podStartE2EDuration="5.800277359s" podCreationTimestamp="2025-11-24 11:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:27:36.791039309 +0000 UTC m=+1108.502563805" watchObservedRunningTime="2025-11-24 11:27:36.800277359 +0000 UTC m=+1108.511801845" Nov 24 11:27:36 crc kubenswrapper[5072]: I1124 11:27:36.826855 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"384915e0-f433-462f-82ab-d31ebaeb63d1","Type":"ContainerStarted","Data":"4ae2bee729ee7067014edf026176f853bbb623ff29ed5ecad8dd51ed077a485a"} Nov 24 11:27:36 crc kubenswrapper[5072]: I1124 11:27:36.831784 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"98f36c5e-b827-4fcb-ac98-8eb62f230787","Type":"ContainerStarted","Data":"ed18e3d0cc57e852fb841e3e550e978f9f8476f1f109f8ea1dd4470d23e32466"} Nov 24 11:27:36 crc kubenswrapper[5072]: I1124 11:27:36.849543 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.20678821 podStartE2EDuration="5.849530874s" podCreationTimestamp="2025-11-24 11:27:31 +0000 UTC" firstStartedPulling="2025-11-24 11:27:32.759788203 +0000 UTC m=+1104.471312679" lastFinishedPulling="2025-11-24 11:27:36.402530867 +0000 UTC m=+1108.114055343" observedRunningTime="2025-11-24 11:27:36.846360255 +0000 UTC m=+1108.557884721" watchObservedRunningTime="2025-11-24 11:27:36.849530874 +0000 UTC m=+1108.561055340" Nov 24 11:27:36 crc kubenswrapper[5072]: I1124 11:27:36.851595 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.373575398 podStartE2EDuration="5.851587215s" podCreationTimestamp="2025-11-24 11:27:31 +0000 UTC" firstStartedPulling="2025-11-24 11:27:32.916710436 +0000 UTC m=+1104.628234912" lastFinishedPulling="2025-11-24 11:27:36.394722233 +0000 UTC m=+1108.106246729" observedRunningTime="2025-11-24 11:27:36.826760478 +0000 UTC m=+1108.538284954" watchObservedRunningTime="2025-11-24 11:27:36.851587215 +0000 UTC m=+1108.563111691" Nov 24 11:27:37 crc kubenswrapper[5072]: I1124 11:27:37.222427 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 11:27:37 crc kubenswrapper[5072]: I1124 11:27:37.379925 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:27:37 crc kubenswrapper[5072]: I1124 11:27:37.843994 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6360280b-2986-4593-86e3-e1ea63a0c6de","Type":"ContainerStarted","Data":"c8ae88dd61f6346ceb0492cdbf60ead1cf63dcb955bcc89534f9b8bff335750c"} Nov 24 11:27:37 crc kubenswrapper[5072]: I1124 11:27:37.844240 5072 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-metadata-0" podUID="6360280b-2986-4593-86e3-e1ea63a0c6de" containerName="nova-metadata-log" containerID="cri-o://dd7f42b1411724649601df020fc2c9ae6f9cc00a6ef4ef01579c9049a308b20f" gracePeriod=30 Nov 24 11:27:37 crc kubenswrapper[5072]: I1124 11:27:37.844274 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6360280b-2986-4593-86e3-e1ea63a0c6de" containerName="nova-metadata-metadata" containerID="cri-o://c8ae88dd61f6346ceb0492cdbf60ead1cf63dcb955bcc89534f9b8bff335750c" gracePeriod=30 Nov 24 11:27:37 crc kubenswrapper[5072]: I1124 11:27:37.849338 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"384915e0-f433-462f-82ab-d31ebaeb63d1","Type":"ContainerStarted","Data":"e55bc6d582815acbbe92c98b219fe9cd994a4d47e8ddba022e60b691f5fa6fe0"} Nov 24 11:27:37 crc kubenswrapper[5072]: I1124 11:27:37.911108 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.377472594 podStartE2EDuration="6.911090744s" podCreationTimestamp="2025-11-24 11:27:31 +0000 UTC" firstStartedPulling="2025-11-24 11:27:32.862300033 +0000 UTC m=+1104.573824509" lastFinishedPulling="2025-11-24 11:27:36.395918183 +0000 UTC m=+1108.107442659" observedRunningTime="2025-11-24 11:27:37.907827442 +0000 UTC m=+1109.619351938" watchObservedRunningTime="2025-11-24 11:27:37.911090744 +0000 UTC m=+1109.622615220" Nov 24 11:27:37 crc kubenswrapper[5072]: I1124 11:27:37.930923 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.182629019 podStartE2EDuration="6.930907777s" podCreationTimestamp="2025-11-24 11:27:31 +0000 UTC" firstStartedPulling="2025-11-24 11:27:32.672349149 +0000 UTC m=+1104.383873625" lastFinishedPulling="2025-11-24 11:27:36.420627887 +0000 UTC m=+1108.132152383" observedRunningTime="2025-11-24 11:27:37.925548553 +0000 UTC m=+1109.637073039" watchObservedRunningTime="2025-11-24 11:27:37.930907777 +0000 UTC m=+1109.642432253" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.415338 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.452585 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glmb7\" (UniqueName: \"kubernetes.io/projected/6360280b-2986-4593-86e3-e1ea63a0c6de-kube-api-access-glmb7\") pod \"6360280b-2986-4593-86e3-e1ea63a0c6de\" (UID: \"6360280b-2986-4593-86e3-e1ea63a0c6de\") " Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.452634 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6360280b-2986-4593-86e3-e1ea63a0c6de-logs\") pod \"6360280b-2986-4593-86e3-e1ea63a0c6de\" (UID: \"6360280b-2986-4593-86e3-e1ea63a0c6de\") " Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.452657 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6360280b-2986-4593-86e3-e1ea63a0c6de-combined-ca-bundle\") pod \"6360280b-2986-4593-86e3-e1ea63a0c6de\" (UID: \"6360280b-2986-4593-86e3-e1ea63a0c6de\") " Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.452743 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6360280b-2986-4593-86e3-e1ea63a0c6de-config-data\") pod \"6360280b-2986-4593-86e3-e1ea63a0c6de\" (UID: \"6360280b-2986-4593-86e3-e1ea63a0c6de\") " Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.453749 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6360280b-2986-4593-86e3-e1ea63a0c6de-logs" (OuterVolumeSpecName: "logs") pod "6360280b-2986-4593-86e3-e1ea63a0c6de" (UID: "6360280b-2986-4593-86e3-e1ea63a0c6de"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.462195 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6360280b-2986-4593-86e3-e1ea63a0c6de-kube-api-access-glmb7" (OuterVolumeSpecName: "kube-api-access-glmb7") pod "6360280b-2986-4593-86e3-e1ea63a0c6de" (UID: "6360280b-2986-4593-86e3-e1ea63a0c6de"). InnerVolumeSpecName "kube-api-access-glmb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.493866 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6360280b-2986-4593-86e3-e1ea63a0c6de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6360280b-2986-4593-86e3-e1ea63a0c6de" (UID: "6360280b-2986-4593-86e3-e1ea63a0c6de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.497914 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6360280b-2986-4593-86e3-e1ea63a0c6de-config-data" (OuterVolumeSpecName: "config-data") pod "6360280b-2986-4593-86e3-e1ea63a0c6de" (UID: "6360280b-2986-4593-86e3-e1ea63a0c6de"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.555275 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glmb7\" (UniqueName: \"kubernetes.io/projected/6360280b-2986-4593-86e3-e1ea63a0c6de-kube-api-access-glmb7\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.555330 5072 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6360280b-2986-4593-86e3-e1ea63a0c6de-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.555353 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6360280b-2986-4593-86e3-e1ea63a0c6de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.555388 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6360280b-2986-4593-86e3-e1ea63a0c6de-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.862273 5072 generic.go:334] "Generic (PLEG): container finished" podID="6360280b-2986-4593-86e3-e1ea63a0c6de" containerID="c8ae88dd61f6346ceb0492cdbf60ead1cf63dcb955bcc89534f9b8bff335750c" exitCode=0 Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.862310 5072 generic.go:334] "Generic (PLEG): container finished" podID="6360280b-2986-4593-86e3-e1ea63a0c6de" containerID="dd7f42b1411724649601df020fc2c9ae6f9cc00a6ef4ef01579c9049a308b20f" exitCode=143 Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.862388 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.862436 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6360280b-2986-4593-86e3-e1ea63a0c6de","Type":"ContainerDied","Data":"c8ae88dd61f6346ceb0492cdbf60ead1cf63dcb955bcc89534f9b8bff335750c"} Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.862469 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6360280b-2986-4593-86e3-e1ea63a0c6de","Type":"ContainerDied","Data":"dd7f42b1411724649601df020fc2c9ae6f9cc00a6ef4ef01579c9049a308b20f"} Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.862482 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6360280b-2986-4593-86e3-e1ea63a0c6de","Type":"ContainerDied","Data":"cc4d9a0b77750b7f166b69ef181ca88b0fb20e271477fc8661eaacb4dc2ed016"} Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.862498 5072 scope.go:117] "RemoveContainer" containerID="c8ae88dd61f6346ceb0492cdbf60ead1cf63dcb955bcc89534f9b8bff335750c" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.892648 5072 scope.go:117] "RemoveContainer" containerID="dd7f42b1411724649601df020fc2c9ae6f9cc00a6ef4ef01579c9049a308b20f" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.912691 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.918664 5072 scope.go:117] "RemoveContainer" containerID="c8ae88dd61f6346ceb0492cdbf60ead1cf63dcb955bcc89534f9b8bff335750c" Nov 24 11:27:38 crc kubenswrapper[5072]: E1124 11:27:38.919178 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"c8ae88dd61f6346ceb0492cdbf60ead1cf63dcb955bcc89534f9b8bff335750c\": container with ID starting with c8ae88dd61f6346ceb0492cdbf60ead1cf63dcb955bcc89534f9b8bff335750c not found: ID does not exist" containerID="c8ae88dd61f6346ceb0492cdbf60ead1cf63dcb955bcc89534f9b8bff335750c" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.919281 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8ae88dd61f6346ceb0492cdbf60ead1cf63dcb955bcc89534f9b8bff335750c"} err="failed to get container status \"c8ae88dd61f6346ceb0492cdbf60ead1cf63dcb955bcc89534f9b8bff335750c\": rpc error: code = NotFound desc = could not find container \"c8ae88dd61f6346ceb0492cdbf60ead1cf63dcb955bcc89534f9b8bff335750c\": container with ID starting with c8ae88dd61f6346ceb0492cdbf60ead1cf63dcb955bcc89534f9b8bff335750c not found: ID does not exist" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.919306 5072 scope.go:117] "RemoveContainer" containerID="dd7f42b1411724649601df020fc2c9ae6f9cc00a6ef4ef01579c9049a308b20f" Nov 24 11:27:38 crc kubenswrapper[5072]: E1124 11:27:38.919593 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd7f42b1411724649601df020fc2c9ae6f9cc00a6ef4ef01579c9049a308b20f\": container with ID starting with dd7f42b1411724649601df020fc2c9ae6f9cc00a6ef4ef01579c9049a308b20f not found: ID does not exist" containerID="dd7f42b1411724649601df020fc2c9ae6f9cc00a6ef4ef01579c9049a308b20f" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.919630 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd7f42b1411724649601df020fc2c9ae6f9cc00a6ef4ef01579c9049a308b20f"} err="failed to get container status \"dd7f42b1411724649601df020fc2c9ae6f9cc00a6ef4ef01579c9049a308b20f\": rpc error: code = NotFound desc = could not find container \"dd7f42b1411724649601df020fc2c9ae6f9cc00a6ef4ef01579c9049a308b20f\": container with ID starting with dd7f42b1411724649601df020fc2c9ae6f9cc00a6ef4ef01579c9049a308b20f not found: ID does not exist" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.919648 5072 scope.go:117] "RemoveContainer" containerID="c8ae88dd61f6346ceb0492cdbf60ead1cf63dcb955bcc89534f9b8bff335750c" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.920368 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8ae88dd61f6346ceb0492cdbf60ead1cf63dcb955bcc89534f9b8bff335750c"} err="failed to get container status \"c8ae88dd61f6346ceb0492cdbf60ead1cf63dcb955bcc89534f9b8bff335750c\": rpc error: code = NotFound desc = could not find container \"c8ae88dd61f6346ceb0492cdbf60ead1cf63dcb955bcc89534f9b8bff335750c\": container with ID starting with c8ae88dd61f6346ceb0492cdbf60ead1cf63dcb955bcc89534f9b8bff335750c not found: ID does not exist" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.921015 5072 scope.go:117] "RemoveContainer" containerID="dd7f42b1411724649601df020fc2c9ae6f9cc00a6ef4ef01579c9049a308b20f" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.921563 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd7f42b1411724649601df020fc2c9ae6f9cc00a6ef4ef01579c9049a308b20f"} err="failed to get container status \"dd7f42b1411724649601df020fc2c9ae6f9cc00a6ef4ef01579c9049a308b20f\": rpc error: code = NotFound desc = could not find container \"dd7f42b1411724649601df020fc2c9ae6f9cc00a6ef4ef01579c9049a308b20f\": 
container with ID starting with dd7f42b1411724649601df020fc2c9ae6f9cc00a6ef4ef01579c9049a308b20f not found: ID does not exist" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.933197 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.945073 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:27:38 crc kubenswrapper[5072]: E1124 11:27:38.945786 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6360280b-2986-4593-86e3-e1ea63a0c6de" containerName="nova-metadata-metadata" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.945815 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="6360280b-2986-4593-86e3-e1ea63a0c6de" containerName="nova-metadata-metadata" Nov 24 11:27:38 crc kubenswrapper[5072]: E1124 11:27:38.945853 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6360280b-2986-4593-86e3-e1ea63a0c6de" containerName="nova-metadata-log" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.945864 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="6360280b-2986-4593-86e3-e1ea63a0c6de" containerName="nova-metadata-log" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.946153 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="6360280b-2986-4593-86e3-e1ea63a0c6de" containerName="nova-metadata-metadata" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.946186 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="6360280b-2986-4593-86e3-e1ea63a0c6de" containerName="nova-metadata-log" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.947569 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.950281 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.950320 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.961089 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.962892 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80fddc5b-cdae-434c-a4e1-1fac9405a39a-logs\") pod \"nova-metadata-0\" (UID: \"80fddc5b-cdae-434c-a4e1-1fac9405a39a\") " pod="openstack/nova-metadata-0" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.963019 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80fddc5b-cdae-434c-a4e1-1fac9405a39a-config-data\") pod \"nova-metadata-0\" (UID: \"80fddc5b-cdae-434c-a4e1-1fac9405a39a\") " pod="openstack/nova-metadata-0" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.963057 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjgw2\" (UniqueName: \"kubernetes.io/projected/80fddc5b-cdae-434c-a4e1-1fac9405a39a-kube-api-access-sjgw2\") pod \"nova-metadata-0\" (UID: \"80fddc5b-cdae-434c-a4e1-1fac9405a39a\") " pod="openstack/nova-metadata-0" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.963092 5072 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/80fddc5b-cdae-434c-a4e1-1fac9405a39a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"80fddc5b-cdae-434c-a4e1-1fac9405a39a\") " pod="openstack/nova-metadata-0" Nov 24 11:27:38 crc kubenswrapper[5072]: I1124 11:27:38.963260 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80fddc5b-cdae-434c-a4e1-1fac9405a39a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"80fddc5b-cdae-434c-a4e1-1fac9405a39a\") " pod="openstack/nova-metadata-0" Nov 24 11:27:39 crc kubenswrapper[5072]: I1124 11:27:39.027186 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6360280b-2986-4593-86e3-e1ea63a0c6de" path="/var/lib/kubelet/pods/6360280b-2986-4593-86e3-e1ea63a0c6de/volumes" Nov 24 11:27:39 crc kubenswrapper[5072]: I1124 11:27:39.064730 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80fddc5b-cdae-434c-a4e1-1fac9405a39a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"80fddc5b-cdae-434c-a4e1-1fac9405a39a\") " pod="openstack/nova-metadata-0" Nov 24 11:27:39 crc kubenswrapper[5072]: I1124 11:27:39.064813 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80fddc5b-cdae-434c-a4e1-1fac9405a39a-logs\") pod \"nova-metadata-0\" (UID: \"80fddc5b-cdae-434c-a4e1-1fac9405a39a\") " pod="openstack/nova-metadata-0" Nov 24 11:27:39 crc kubenswrapper[5072]: I1124 11:27:39.064881 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80fddc5b-cdae-434c-a4e1-1fac9405a39a-config-data\") pod \"nova-metadata-0\" (UID: \"80fddc5b-cdae-434c-a4e1-1fac9405a39a\") " pod="openstack/nova-metadata-0" Nov 24 11:27:39 crc kubenswrapper[5072]: I1124 11:27:39.064902 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjgw2\" (UniqueName: \"kubernetes.io/projected/80fddc5b-cdae-434c-a4e1-1fac9405a39a-kube-api-access-sjgw2\") pod \"nova-metadata-0\" (UID: \"80fddc5b-cdae-434c-a4e1-1fac9405a39a\") " pod="openstack/nova-metadata-0" Nov 24 11:27:39 crc kubenswrapper[5072]: I1124 11:27:39.064928 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/80fddc5b-cdae-434c-a4e1-1fac9405a39a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"80fddc5b-cdae-434c-a4e1-1fac9405a39a\") " pod="openstack/nova-metadata-0" Nov 24 11:27:39 crc kubenswrapper[5072]: I1124 11:27:39.067044 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80fddc5b-cdae-434c-a4e1-1fac9405a39a-logs\") pod \"nova-metadata-0\" (UID: \"80fddc5b-cdae-434c-a4e1-1fac9405a39a\") " pod="openstack/nova-metadata-0" Nov 24 11:27:39 crc kubenswrapper[5072]: I1124 11:27:39.069103 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80fddc5b-cdae-434c-a4e1-1fac9405a39a-config-data\") pod \"nova-metadata-0\" (UID: \"80fddc5b-cdae-434c-a4e1-1fac9405a39a\") " pod="openstack/nova-metadata-0" Nov 24 11:27:39 crc kubenswrapper[5072]: I1124 11:27:39.069719 5072 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/80fddc5b-cdae-434c-a4e1-1fac9405a39a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"80fddc5b-cdae-434c-a4e1-1fac9405a39a\") " pod="openstack/nova-metadata-0" Nov 24 11:27:39 crc kubenswrapper[5072]: I1124 11:27:39.069926 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80fddc5b-cdae-434c-a4e1-1fac9405a39a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"80fddc5b-cdae-434c-a4e1-1fac9405a39a\") " pod="openstack/nova-metadata-0" Nov 24 11:27:39 crc kubenswrapper[5072]: I1124 11:27:39.082795 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjgw2\" (UniqueName: \"kubernetes.io/projected/80fddc5b-cdae-434c-a4e1-1fac9405a39a-kube-api-access-sjgw2\") pod \"nova-metadata-0\" (UID: \"80fddc5b-cdae-434c-a4e1-1fac9405a39a\") " pod="openstack/nova-metadata-0" Nov 24 11:27:39 crc kubenswrapper[5072]: I1124 11:27:39.274275 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:27:39 crc kubenswrapper[5072]: I1124 11:27:39.584766 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:27:39 crc kubenswrapper[5072]: W1124 11:27:39.604414 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80fddc5b_cdae_434c_a4e1_1fac9405a39a.slice/crio-f2186368736707b71ae45b8e427898796be312e0fe1fc4b03cf4dab3778bd498 WatchSource:0}: Error finding container f2186368736707b71ae45b8e427898796be312e0fe1fc4b03cf4dab3778bd498: Status 404 returned error can't find the container with id f2186368736707b71ae45b8e427898796be312e0fe1fc4b03cf4dab3778bd498 Nov 24 11:27:39 crc kubenswrapper[5072]: I1124 11:27:39.879574 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"80fddc5b-cdae-434c-a4e1-1fac9405a39a","Type":"ContainerStarted","Data":"a0fd18b66a49525b83c83eb0d7bde0e7a48eef3abb1634e63f9dbe311df9b0d6"} Nov 24 11:27:39 crc kubenswrapper[5072]: I1124 11:27:39.879968 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"80fddc5b-cdae-434c-a4e1-1fac9405a39a","Type":"ContainerStarted","Data":"f2186368736707b71ae45b8e427898796be312e0fe1fc4b03cf4dab3778bd498"} Nov 24 11:27:40 crc kubenswrapper[5072]: I1124 11:27:40.897336 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"80fddc5b-cdae-434c-a4e1-1fac9405a39a","Type":"ContainerStarted","Data":"b7182bf7de83f4516813c085958e3d1d2ac3b922820f1f3968ae7add71d9d5ad"} Nov 24 11:27:40 crc kubenswrapper[5072]: I1124 11:27:40.901635 5072 generic.go:334] "Generic (PLEG): container finished" podID="f1dfc861-93be-4798-b474-eab29b57c56b" containerID="c075a0b6c571df3a9da3865213dc0fdfafca0e85fcc958bd975825b331cd7639" exitCode=0 Nov 24 11:27:40 crc kubenswrapper[5072]: I1124 11:27:40.901710 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-28tkc" event={"ID":"f1dfc861-93be-4798-b474-eab29b57c56b","Type":"ContainerDied","Data":"c075a0b6c571df3a9da3865213dc0fdfafca0e85fcc958bd975825b331cd7639"} Nov 24 11:27:40 crc kubenswrapper[5072]: I1124 11:27:40.903593 5072 generic.go:334] "Generic (PLEG): container finished" podID="08555f6e-e089-44c2-9193-b40a03e6f2f5" 
containerID="dd9b1d0df5faeef81f5840dd58ed4436962ca833cf0b88f5779837a365ae20aa" exitCode=0 Nov 24 11:27:40 crc kubenswrapper[5072]: I1124 11:27:40.903640 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cd5bg" event={"ID":"08555f6e-e089-44c2-9193-b40a03e6f2f5","Type":"ContainerDied","Data":"dd9b1d0df5faeef81f5840dd58ed4436962ca833cf0b88f5779837a365ae20aa"} Nov 24 11:27:40 crc kubenswrapper[5072]: I1124 11:27:40.934079 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.934056854 podStartE2EDuration="2.934056854s" podCreationTimestamp="2025-11-24 11:27:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:27:40.921460501 +0000 UTC m=+1112.632984977" watchObservedRunningTime="2025-11-24 11:27:40.934056854 +0000 UTC m=+1112.645581330" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.082110 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.083690 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.222745 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.290660 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.404783 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-566b5b7845-5pgtx" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.423434 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-28tkc" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.439781 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cd5bg" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.478323 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-nf7ht"] Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.478657 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d97fcdd8f-nf7ht" podUID="6203439c-7b33-45b5-b052-9a09e6df2f11" containerName="dnsmasq-dns" containerID="cri-o://93c1ba59b63148f4a6709489e721e342ceab7993a340f5b44ce6e6491b48edbc" gracePeriod=10 Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.532394 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1dfc861-93be-4798-b474-eab29b57c56b-config-data\") pod \"f1dfc861-93be-4798-b474-eab29b57c56b\" (UID: \"f1dfc861-93be-4798-b474-eab29b57c56b\") " Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.532482 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfb7c\" (UniqueName: \"kubernetes.io/projected/f1dfc861-93be-4798-b474-eab29b57c56b-kube-api-access-xfb7c\") pod \"f1dfc861-93be-4798-b474-eab29b57c56b\" (UID: \"f1dfc861-93be-4798-b474-eab29b57c56b\") " Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.532561 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1dfc861-93be-4798-b474-eab29b57c56b-combined-ca-bundle\") pod \"f1dfc861-93be-4798-b474-eab29b57c56b\" (UID: \"f1dfc861-93be-4798-b474-eab29b57c56b\") " Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.532602 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1dfc861-93be-4798-b474-eab29b57c56b-scripts\") pod \"f1dfc861-93be-4798-b474-eab29b57c56b\" (UID: \"f1dfc861-93be-4798-b474-eab29b57c56b\") " Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.548702 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1dfc861-93be-4798-b474-eab29b57c56b-scripts" (OuterVolumeSpecName: "scripts") pod "f1dfc861-93be-4798-b474-eab29b57c56b" (UID: "f1dfc861-93be-4798-b474-eab29b57c56b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.548710 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1dfc861-93be-4798-b474-eab29b57c56b-kube-api-access-xfb7c" (OuterVolumeSpecName: "kube-api-access-xfb7c") pod "f1dfc861-93be-4798-b474-eab29b57c56b" (UID: "f1dfc861-93be-4798-b474-eab29b57c56b"). InnerVolumeSpecName "kube-api-access-xfb7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.565456 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1dfc861-93be-4798-b474-eab29b57c56b-config-data" (OuterVolumeSpecName: "config-data") pod "f1dfc861-93be-4798-b474-eab29b57c56b" (UID: "f1dfc861-93be-4798-b474-eab29b57c56b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.574251 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1dfc861-93be-4798-b474-eab29b57c56b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1dfc861-93be-4798-b474-eab29b57c56b" (UID: "f1dfc861-93be-4798-b474-eab29b57c56b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.635111 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08555f6e-e089-44c2-9193-b40a03e6f2f5-config-data\") pod \"08555f6e-e089-44c2-9193-b40a03e6f2f5\" (UID: \"08555f6e-e089-44c2-9193-b40a03e6f2f5\") " Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.635162 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr2mf\" (UniqueName: \"kubernetes.io/projected/08555f6e-e089-44c2-9193-b40a03e6f2f5-kube-api-access-pr2mf\") pod \"08555f6e-e089-44c2-9193-b40a03e6f2f5\" (UID: \"08555f6e-e089-44c2-9193-b40a03e6f2f5\") " Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.635185 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08555f6e-e089-44c2-9193-b40a03e6f2f5-scripts\") pod \"08555f6e-e089-44c2-9193-b40a03e6f2f5\" (UID: \"08555f6e-e089-44c2-9193-b40a03e6f2f5\") " Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.635268 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08555f6e-e089-44c2-9193-b40a03e6f2f5-combined-ca-bundle\") pod \"08555f6e-e089-44c2-9193-b40a03e6f2f5\" (UID: \"08555f6e-e089-44c2-9193-b40a03e6f2f5\") " Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.635842 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1dfc861-93be-4798-b474-eab29b57c56b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.635860 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1dfc861-93be-4798-b474-eab29b57c56b-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.635870 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1dfc861-93be-4798-b474-eab29b57c56b-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.635879 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfb7c\" (UniqueName: \"kubernetes.io/projected/f1dfc861-93be-4798-b474-eab29b57c56b-kube-api-access-xfb7c\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.639789 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08555f6e-e089-44c2-9193-b40a03e6f2f5-scripts" (OuterVolumeSpecName: "scripts") pod "08555f6e-e089-44c2-9193-b40a03e6f2f5" (UID: "08555f6e-e089-44c2-9193-b40a03e6f2f5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.640185 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08555f6e-e089-44c2-9193-b40a03e6f2f5-kube-api-access-pr2mf" (OuterVolumeSpecName: "kube-api-access-pr2mf") pod "08555f6e-e089-44c2-9193-b40a03e6f2f5" (UID: "08555f6e-e089-44c2-9193-b40a03e6f2f5"). InnerVolumeSpecName "kube-api-access-pr2mf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.665504 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08555f6e-e089-44c2-9193-b40a03e6f2f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08555f6e-e089-44c2-9193-b40a03e6f2f5" (UID: "08555f6e-e089-44c2-9193-b40a03e6f2f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.667545 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08555f6e-e089-44c2-9193-b40a03e6f2f5-config-data" (OuterVolumeSpecName: "config-data") pod "08555f6e-e089-44c2-9193-b40a03e6f2f5" (UID: "08555f6e-e089-44c2-9193-b40a03e6f2f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.746038 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08555f6e-e089-44c2-9193-b40a03e6f2f5-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.746076 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08555f6e-e089-44c2-9193-b40a03e6f2f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.746092 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08555f6e-e089-44c2-9193-b40a03e6f2f5-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.746104 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr2mf\" (UniqueName: \"kubernetes.io/projected/08555f6e-e089-44c2-9193-b40a03e6f2f5-kube-api-access-pr2mf\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.880845 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-nf7ht" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.941557 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-28tkc" event={"ID":"f1dfc861-93be-4798-b474-eab29b57c56b","Type":"ContainerDied","Data":"824fd7f1f62587b9a21961ac099b8fc393640a070fe023643b44d032268822a5"} Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.941620 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="824fd7f1f62587b9a21961ac099b8fc393640a070fe023643b44d032268822a5" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.941587 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-28tkc" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.943502 5072 generic.go:334] "Generic (PLEG): container finished" podID="6203439c-7b33-45b5-b052-9a09e6df2f11" containerID="93c1ba59b63148f4a6709489e721e342ceab7993a340f5b44ce6e6491b48edbc" exitCode=0 Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.943558 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-nf7ht" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.943593 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-nf7ht" event={"ID":"6203439c-7b33-45b5-b052-9a09e6df2f11","Type":"ContainerDied","Data":"93c1ba59b63148f4a6709489e721e342ceab7993a340f5b44ce6e6491b48edbc"} Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.943658 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-nf7ht" event={"ID":"6203439c-7b33-45b5-b052-9a09e6df2f11","Type":"ContainerDied","Data":"b2d1d68e6b7e93009ff73c815500d27b65bf45d8cd2d576e9d5affabe170d3c4"} Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.943677 5072 scope.go:117] "RemoveContainer" containerID="93c1ba59b63148f4a6709489e721e342ceab7993a340f5b44ce6e6491b48edbc" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.947546 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cd5bg" event={"ID":"08555f6e-e089-44c2-9193-b40a03e6f2f5","Type":"ContainerDied","Data":"ea4bd93b24e287ff9a5a21c0847a70a93a791aa26280bd152c4ba8d1700500ac"} Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.947603 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea4bd93b24e287ff9a5a21c0847a70a93a791aa26280bd152c4ba8d1700500ac" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.947568 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cd5bg" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.948852 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6203439c-7b33-45b5-b052-9a09e6df2f11-ovsdbserver-nb\") pod \"6203439c-7b33-45b5-b052-9a09e6df2f11\" (UID: \"6203439c-7b33-45b5-b052-9a09e6df2f11\") " Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.948966 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rwks\" (UniqueName: \"kubernetes.io/projected/6203439c-7b33-45b5-b052-9a09e6df2f11-kube-api-access-8rwks\") pod \"6203439c-7b33-45b5-b052-9a09e6df2f11\" (UID: \"6203439c-7b33-45b5-b052-9a09e6df2f11\") " Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.949002 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6203439c-7b33-45b5-b052-9a09e6df2f11-config\") pod \"6203439c-7b33-45b5-b052-9a09e6df2f11\" (UID: \"6203439c-7b33-45b5-b052-9a09e6df2f11\") " Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.949110 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6203439c-7b33-45b5-b052-9a09e6df2f11-ovsdbserver-sb\") pod \"6203439c-7b33-45b5-b052-9a09e6df2f11\" (UID: \"6203439c-7b33-45b5-b052-9a09e6df2f11\") " Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.949157 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6203439c-7b33-45b5-b052-9a09e6df2f11-dns-svc\") pod \"6203439c-7b33-45b5-b052-9a09e6df2f11\" (UID: \"6203439c-7b33-45b5-b052-9a09e6df2f11\") " Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.955482 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6203439c-7b33-45b5-b052-9a09e6df2f11-kube-api-access-8rwks" (OuterVolumeSpecName: "kube-api-access-8rwks") pod "6203439c-7b33-45b5-b052-9a09e6df2f11" (UID: "6203439c-7b33-45b5-b052-9a09e6df2f11"). InnerVolumeSpecName "kube-api-access-8rwks". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.981329 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.988118 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6203439c-7b33-45b5-b052-9a09e6df2f11-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6203439c-7b33-45b5-b052-9a09e6df2f11" (UID: "6203439c-7b33-45b5-b052-9a09e6df2f11"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.994492 5072 scope.go:117] "RemoveContainer" containerID="360a5e1a79b597dfa0f67f0a5d0a5d957255ee193b6dcc9922402499eeb0affb" Nov 24 11:27:42 crc kubenswrapper[5072]: I1124 11:27:42.995735 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6203439c-7b33-45b5-b052-9a09e6df2f11-config" (OuterVolumeSpecName: "config") pod "6203439c-7b33-45b5-b052-9a09e6df2f11" (UID: "6203439c-7b33-45b5-b052-9a09e6df2f11"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.028875 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6203439c-7b33-45b5-b052-9a09e6df2f11-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6203439c-7b33-45b5-b052-9a09e6df2f11" (UID: "6203439c-7b33-45b5-b052-9a09e6df2f11"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.035208 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6203439c-7b33-45b5-b052-9a09e6df2f11-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6203439c-7b33-45b5-b052-9a09e6df2f11" (UID: "6203439c-7b33-45b5-b052-9a09e6df2f11"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.048381 5072 scope.go:117] "RemoveContainer" containerID="93c1ba59b63148f4a6709489e721e342ceab7993a340f5b44ce6e6491b48edbc" Nov 24 11:27:43 crc kubenswrapper[5072]: E1124 11:27:43.049995 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93c1ba59b63148f4a6709489e721e342ceab7993a340f5b44ce6e6491b48edbc\": container with ID starting with 93c1ba59b63148f4a6709489e721e342ceab7993a340f5b44ce6e6491b48edbc not found: ID does not exist" containerID="93c1ba59b63148f4a6709489e721e342ceab7993a340f5b44ce6e6491b48edbc" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.050040 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93c1ba59b63148f4a6709489e721e342ceab7993a340f5b44ce6e6491b48edbc"} err="failed to get container status \"93c1ba59b63148f4a6709489e721e342ceab7993a340f5b44ce6e6491b48edbc\": rpc error: code = NotFound desc = could not find container \"93c1ba59b63148f4a6709489e721e342ceab7993a340f5b44ce6e6491b48edbc\": container with ID starting with 93c1ba59b63148f4a6709489e721e342ceab7993a340f5b44ce6e6491b48edbc not found: ID does not exist" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.050064 5072 scope.go:117] "RemoveContainer" containerID="360a5e1a79b597dfa0f67f0a5d0a5d957255ee193b6dcc9922402499eeb0affb" Nov 24 11:27:43 crc kubenswrapper[5072]: E1124 11:27:43.051458 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"360a5e1a79b597dfa0f67f0a5d0a5d957255ee193b6dcc9922402499eeb0affb\": container with ID starting with 360a5e1a79b597dfa0f67f0a5d0a5d957255ee193b6dcc9922402499eeb0affb not found: ID does not exist" containerID="360a5e1a79b597dfa0f67f0a5d0a5d957255ee193b6dcc9922402499eeb0affb" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.051508 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"360a5e1a79b597dfa0f67f0a5d0a5d957255ee193b6dcc9922402499eeb0affb"} err="failed to get container status \"360a5e1a79b597dfa0f67f0a5d0a5d957255ee193b6dcc9922402499eeb0affb\": rpc error: code = NotFound desc = could not find container \"360a5e1a79b597dfa0f67f0a5d0a5d957255ee193b6dcc9922402499eeb0affb\": container with ID starting with 360a5e1a79b597dfa0f67f0a5d0a5d957255ee193b6dcc9922402499eeb0affb not found: ID does not exist" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.053305 5072 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/6203439c-7b33-45b5-b052-9a09e6df2f11-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.053324 5072 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6203439c-7b33-45b5-b052-9a09e6df2f11-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.053334 5072 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6203439c-7b33-45b5-b052-9a09e6df2f11-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.053343 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rwks\" (UniqueName: \"kubernetes.io/projected/6203439c-7b33-45b5-b052-9a09e6df2f11-kube-api-access-8rwks\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.053351 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6203439c-7b33-45b5-b052-9a09e6df2f11-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.080669 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 11:27:43 crc kubenswrapper[5072]: E1124 11:27:43.081046 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6203439c-7b33-45b5-b052-9a09e6df2f11" containerName="init" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.081069 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="6203439c-7b33-45b5-b052-9a09e6df2f11" containerName="init" Nov 24 11:27:43 crc kubenswrapper[5072]: E1124 11:27:43.081089 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6203439c-7b33-45b5-b052-9a09e6df2f11" containerName="dnsmasq-dns" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.081099 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="6203439c-7b33-45b5-b052-9a09e6df2f11" containerName="dnsmasq-dns" Nov 24 11:27:43 crc kubenswrapper[5072]: E1124 11:27:43.081114 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08555f6e-e089-44c2-9193-b40a03e6f2f5" containerName="nova-manage" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.081121 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="08555f6e-e089-44c2-9193-b40a03e6f2f5" containerName="nova-manage" Nov 24 11:27:43 crc kubenswrapper[5072]: E1124 11:27:43.081156 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1dfc861-93be-4798-b474-eab29b57c56b" containerName="nova-cell1-conductor-db-sync" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.081163 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1dfc861-93be-4798-b474-eab29b57c56b" containerName="nova-cell1-conductor-db-sync" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.081359 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="08555f6e-e089-44c2-9193-b40a03e6f2f5" containerName="nova-manage" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.081488 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="6203439c-7b33-45b5-b052-9a09e6df2f11" containerName="dnsmasq-dns" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.081522 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1dfc861-93be-4798-b474-eab29b57c56b" containerName="nova-cell1-conductor-db-sync" Nov 24 11:27:43 crc 
kubenswrapper[5072]: I1124 11:27:43.082185 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.085093 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.092739 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.154975 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42a95d10-e572-4170-aa79-9b98d2c290b7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"42a95d10-e572-4170-aa79-9b98d2c290b7\") " pod="openstack/nova-cell1-conductor-0" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.155043 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42a95d10-e572-4170-aa79-9b98d2c290b7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"42a95d10-e572-4170-aa79-9b98d2c290b7\") " pod="openstack/nova-cell1-conductor-0" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.155114 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs8g7\" (UniqueName: \"kubernetes.io/projected/42a95d10-e572-4170-aa79-9b98d2c290b7-kube-api-access-rs8g7\") pod \"nova-cell1-conductor-0\" (UID: \"42a95d10-e572-4170-aa79-9b98d2c290b7\") " pod="openstack/nova-cell1-conductor-0" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.166533 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="384915e0-f433-462f-82ab-d31ebaeb63d1" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.167:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.166801 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="384915e0-f433-462f-82ab-d31ebaeb63d1" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.167:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.178189 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.178548 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="384915e0-f433-462f-82ab-d31ebaeb63d1" containerName="nova-api-api" containerID="cri-o://e55bc6d582815acbbe92c98b219fe9cd994a4d47e8ddba022e60b691f5fa6fe0" gracePeriod=30 Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.178781 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="384915e0-f433-462f-82ab-d31ebaeb63d1" containerName="nova-api-log" containerID="cri-o://4ae2bee729ee7067014edf026176f853bbb623ff29ed5ecad8dd51ed077a485a" gracePeriod=30 Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.220909 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.221543 5072 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-metadata-0" podUID="80fddc5b-cdae-434c-a4e1-1fac9405a39a" containerName="nova-metadata-log" containerID="cri-o://a0fd18b66a49525b83c83eb0d7bde0e7a48eef3abb1634e63f9dbe311df9b0d6" gracePeriod=30 Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.222001 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="80fddc5b-cdae-434c-a4e1-1fac9405a39a" containerName="nova-metadata-metadata" containerID="cri-o://b7182bf7de83f4516813c085958e3d1d2ac3b922820f1f3968ae7add71d9d5ad" gracePeriod=30 Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.256485 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs8g7\" (UniqueName: \"kubernetes.io/projected/42a95d10-e572-4170-aa79-9b98d2c290b7-kube-api-access-rs8g7\") pod \"nova-cell1-conductor-0\" (UID: \"42a95d10-e572-4170-aa79-9b98d2c290b7\") " pod="openstack/nova-cell1-conductor-0" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.256616 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42a95d10-e572-4170-aa79-9b98d2c290b7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"42a95d10-e572-4170-aa79-9b98d2c290b7\") " pod="openstack/nova-cell1-conductor-0" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.256673 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42a95d10-e572-4170-aa79-9b98d2c290b7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"42a95d10-e572-4170-aa79-9b98d2c290b7\") " pod="openstack/nova-cell1-conductor-0" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.261156 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42a95d10-e572-4170-aa79-9b98d2c290b7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"42a95d10-e572-4170-aa79-9b98d2c290b7\") " pod="openstack/nova-cell1-conductor-0" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.261164 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42a95d10-e572-4170-aa79-9b98d2c290b7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"42a95d10-e572-4170-aa79-9b98d2c290b7\") " pod="openstack/nova-cell1-conductor-0" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.274512 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-nf7ht"] Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.277541 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs8g7\" (UniqueName: \"kubernetes.io/projected/42a95d10-e572-4170-aa79-9b98d2c290b7-kube-api-access-rs8g7\") pod \"nova-cell1-conductor-0\" (UID: \"42a95d10-e572-4170-aa79-9b98d2c290b7\") " pod="openstack/nova-cell1-conductor-0" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.278668 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-nf7ht"] Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.405951 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.462555 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.644723 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.645036 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.806172 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.867117 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjgw2\" (UniqueName: \"kubernetes.io/projected/80fddc5b-cdae-434c-a4e1-1fac9405a39a-kube-api-access-sjgw2\") pod \"80fddc5b-cdae-434c-a4e1-1fac9405a39a\" (UID: \"80fddc5b-cdae-434c-a4e1-1fac9405a39a\") " Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.867193 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80fddc5b-cdae-434c-a4e1-1fac9405a39a-config-data\") pod \"80fddc5b-cdae-434c-a4e1-1fac9405a39a\" (UID: \"80fddc5b-cdae-434c-a4e1-1fac9405a39a\") " Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.867259 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80fddc5b-cdae-434c-a4e1-1fac9405a39a-logs\") pod \"80fddc5b-cdae-434c-a4e1-1fac9405a39a\" (UID: \"80fddc5b-cdae-434c-a4e1-1fac9405a39a\") " Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.867294 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80fddc5b-cdae-434c-a4e1-1fac9405a39a-combined-ca-bundle\") pod \"80fddc5b-cdae-434c-a4e1-1fac9405a39a\" (UID: \"80fddc5b-cdae-434c-a4e1-1fac9405a39a\") " Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.867407 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/80fddc5b-cdae-434c-a4e1-1fac9405a39a-nova-metadata-tls-certs\") pod \"80fddc5b-cdae-434c-a4e1-1fac9405a39a\" (UID: \"80fddc5b-cdae-434c-a4e1-1fac9405a39a\") " Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.868219 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80fddc5b-cdae-434c-a4e1-1fac9405a39a-logs" (OuterVolumeSpecName: "logs") pod "80fddc5b-cdae-434c-a4e1-1fac9405a39a" (UID: "80fddc5b-cdae-434c-a4e1-1fac9405a39a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.889289 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80fddc5b-cdae-434c-a4e1-1fac9405a39a-kube-api-access-sjgw2" (OuterVolumeSpecName: "kube-api-access-sjgw2") pod "80fddc5b-cdae-434c-a4e1-1fac9405a39a" (UID: "80fddc5b-cdae-434c-a4e1-1fac9405a39a"). InnerVolumeSpecName "kube-api-access-sjgw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.896080 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80fddc5b-cdae-434c-a4e1-1fac9405a39a-config-data" (OuterVolumeSpecName: "config-data") pod "80fddc5b-cdae-434c-a4e1-1fac9405a39a" (UID: "80fddc5b-cdae-434c-a4e1-1fac9405a39a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.903437 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80fddc5b-cdae-434c-a4e1-1fac9405a39a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80fddc5b-cdae-434c-a4e1-1fac9405a39a" (UID: "80fddc5b-cdae-434c-a4e1-1fac9405a39a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.940343 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.942309 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80fddc5b-cdae-434c-a4e1-1fac9405a39a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "80fddc5b-cdae-434c-a4e1-1fac9405a39a" (UID: "80fddc5b-cdae-434c-a4e1-1fac9405a39a"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:27:43 crc kubenswrapper[5072]: W1124 11:27:43.945539 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42a95d10_e572_4170_aa79_9b98d2c290b7.slice/crio-979253ace31f97c41e4b0f6afec63e22f72534b81e9e6acc108624a9ee16733b WatchSource:0}: Error finding container 979253ace31f97c41e4b0f6afec63e22f72534b81e9e6acc108624a9ee16733b: Status 404 returned error can't find the container with id 979253ace31f97c41e4b0f6afec63e22f72534b81e9e6acc108624a9ee16733b Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.956937 5072 generic.go:334] "Generic (PLEG): container finished" podID="384915e0-f433-462f-82ab-d31ebaeb63d1" containerID="4ae2bee729ee7067014edf026176f853bbb623ff29ed5ecad8dd51ed077a485a" exitCode=143 Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.957016 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"384915e0-f433-462f-82ab-d31ebaeb63d1","Type":"ContainerDied","Data":"4ae2bee729ee7067014edf026176f853bbb623ff29ed5ecad8dd51ed077a485a"} Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.959725 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"42a95d10-e572-4170-aa79-9b98d2c290b7","Type":"ContainerStarted","Data":"979253ace31f97c41e4b0f6afec63e22f72534b81e9e6acc108624a9ee16733b"} Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.961926 5072 generic.go:334] "Generic (PLEG): container finished" podID="80fddc5b-cdae-434c-a4e1-1fac9405a39a" containerID="b7182bf7de83f4516813c085958e3d1d2ac3b922820f1f3968ae7add71d9d5ad" exitCode=0 Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.961946 5072 generic.go:334] "Generic (PLEG): container finished" podID="80fddc5b-cdae-434c-a4e1-1fac9405a39a" containerID="a0fd18b66a49525b83c83eb0d7bde0e7a48eef3abb1634e63f9dbe311df9b0d6" exitCode=143 Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.961987 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"80fddc5b-cdae-434c-a4e1-1fac9405a39a","Type":"ContainerDied","Data":"b7182bf7de83f4516813c085958e3d1d2ac3b922820f1f3968ae7add71d9d5ad"} Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.962025 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.962048 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"80fddc5b-cdae-434c-a4e1-1fac9405a39a","Type":"ContainerDied","Data":"a0fd18b66a49525b83c83eb0d7bde0e7a48eef3abb1634e63f9dbe311df9b0d6"} Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.962065 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"80fddc5b-cdae-434c-a4e1-1fac9405a39a","Type":"ContainerDied","Data":"f2186368736707b71ae45b8e427898796be312e0fe1fc4b03cf4dab3778bd498"} Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.962086 5072 scope.go:117] "RemoveContainer" containerID="b7182bf7de83f4516813c085958e3d1d2ac3b922820f1f3968ae7add71d9d5ad" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.971804 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjgw2\" (UniqueName: \"kubernetes.io/projected/80fddc5b-cdae-434c-a4e1-1fac9405a39a-kube-api-access-sjgw2\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.971852 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80fddc5b-cdae-434c-a4e1-1fac9405a39a-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.971869 5072 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80fddc5b-cdae-434c-a4e1-1fac9405a39a-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.971884 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80fddc5b-cdae-434c-a4e1-1fac9405a39a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.971901 5072 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/80fddc5b-cdae-434c-a4e1-1fac9405a39a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:43 crc kubenswrapper[5072]: I1124 11:27:43.998272 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.000562 5072 scope.go:117] "RemoveContainer" containerID="a0fd18b66a49525b83c83eb0d7bde0e7a48eef3abb1634e63f9dbe311df9b0d6" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.017241 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.048442 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:27:44 crc kubenswrapper[5072]: E1124 11:27:44.049151 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80fddc5b-cdae-434c-a4e1-1fac9405a39a" containerName="nova-metadata-metadata" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.049243 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="80fddc5b-cdae-434c-a4e1-1fac9405a39a" containerName="nova-metadata-metadata" Nov 24 11:27:44 crc kubenswrapper[5072]: E1124 11:27:44.049334 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80fddc5b-cdae-434c-a4e1-1fac9405a39a" containerName="nova-metadata-log" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.049421 5072 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="80fddc5b-cdae-434c-a4e1-1fac9405a39a" containerName="nova-metadata-log" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.049711 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="80fddc5b-cdae-434c-a4e1-1fac9405a39a" containerName="nova-metadata-log" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.049819 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="80fddc5b-cdae-434c-a4e1-1fac9405a39a" containerName="nova-metadata-metadata" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.051106 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.058004 5072 scope.go:117] "RemoveContainer" containerID="b7182bf7de83f4516813c085958e3d1d2ac3b922820f1f3968ae7add71d9d5ad" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.058336 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.058399 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 11:27:44 crc kubenswrapper[5072]: E1124 11:27:44.058905 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7182bf7de83f4516813c085958e3d1d2ac3b922820f1f3968ae7add71d9d5ad\": container with ID starting with b7182bf7de83f4516813c085958e3d1d2ac3b922820f1f3968ae7add71d9d5ad not found: ID does not exist" containerID="b7182bf7de83f4516813c085958e3d1d2ac3b922820f1f3968ae7add71d9d5ad" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.059020 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7182bf7de83f4516813c085958e3d1d2ac3b922820f1f3968ae7add71d9d5ad"} err="failed to get container status \"b7182bf7de83f4516813c085958e3d1d2ac3b922820f1f3968ae7add71d9d5ad\": rpc error: code = NotFound desc = could not find container \"b7182bf7de83f4516813c085958e3d1d2ac3b922820f1f3968ae7add71d9d5ad\": container with ID starting with b7182bf7de83f4516813c085958e3d1d2ac3b922820f1f3968ae7add71d9d5ad not found: ID does not exist" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.059116 5072 scope.go:117] "RemoveContainer" containerID="a0fd18b66a49525b83c83eb0d7bde0e7a48eef3abb1634e63f9dbe311df9b0d6" Nov 24 11:27:44 crc kubenswrapper[5072]: E1124 11:27:44.059462 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0fd18b66a49525b83c83eb0d7bde0e7a48eef3abb1634e63f9dbe311df9b0d6\": container with ID starting with a0fd18b66a49525b83c83eb0d7bde0e7a48eef3abb1634e63f9dbe311df9b0d6 not found: ID does not exist" containerID="a0fd18b66a49525b83c83eb0d7bde0e7a48eef3abb1634e63f9dbe311df9b0d6" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.059563 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0fd18b66a49525b83c83eb0d7bde0e7a48eef3abb1634e63f9dbe311df9b0d6"} err="failed to get container status \"a0fd18b66a49525b83c83eb0d7bde0e7a48eef3abb1634e63f9dbe311df9b0d6\": rpc error: code = NotFound desc = could not find container \"a0fd18b66a49525b83c83eb0d7bde0e7a48eef3abb1634e63f9dbe311df9b0d6\": container with ID starting with a0fd18b66a49525b83c83eb0d7bde0e7a48eef3abb1634e63f9dbe311df9b0d6 not found: ID does not exist" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.059661 5072 
scope.go:117] "RemoveContainer" containerID="b7182bf7de83f4516813c085958e3d1d2ac3b922820f1f3968ae7add71d9d5ad" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.062063 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7182bf7de83f4516813c085958e3d1d2ac3b922820f1f3968ae7add71d9d5ad"} err="failed to get container status \"b7182bf7de83f4516813c085958e3d1d2ac3b922820f1f3968ae7add71d9d5ad\": rpc error: code = NotFound desc = could not find container \"b7182bf7de83f4516813c085958e3d1d2ac3b922820f1f3968ae7add71d9d5ad\": container with ID starting with b7182bf7de83f4516813c085958e3d1d2ac3b922820f1f3968ae7add71d9d5ad not found: ID does not exist" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.062106 5072 scope.go:117] "RemoveContainer" containerID="a0fd18b66a49525b83c83eb0d7bde0e7a48eef3abb1634e63f9dbe311df9b0d6" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.063641 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0fd18b66a49525b83c83eb0d7bde0e7a48eef3abb1634e63f9dbe311df9b0d6"} err="failed to get container status \"a0fd18b66a49525b83c83eb0d7bde0e7a48eef3abb1634e63f9dbe311df9b0d6\": rpc error: code = NotFound desc = could not find container \"a0fd18b66a49525b83c83eb0d7bde0e7a48eef3abb1634e63f9dbe311df9b0d6\": container with ID starting with a0fd18b66a49525b83c83eb0d7bde0e7a48eef3abb1634e63f9dbe311df9b0d6 not found: ID does not exist" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.073709 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2\") " pod="openstack/nova-metadata-0" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.073887 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpwws\" (UniqueName: \"kubernetes.io/projected/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-kube-api-access-jpwws\") pod \"nova-metadata-0\" (UID: \"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2\") " pod="openstack/nova-metadata-0" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.074096 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-logs\") pod \"nova-metadata-0\" (UID: \"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2\") " pod="openstack/nova-metadata-0" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.074201 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-config-data\") pod \"nova-metadata-0\" (UID: \"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2\") " pod="openstack/nova-metadata-0" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.074585 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2\") " pod="openstack/nova-metadata-0" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.079349 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 
11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.177073 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-config-data\") pod \"nova-metadata-0\" (UID: \"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2\") " pod="openstack/nova-metadata-0" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.177518 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2\") " pod="openstack/nova-metadata-0" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.177577 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2\") " pod="openstack/nova-metadata-0" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.177670 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpwws\" (UniqueName: \"kubernetes.io/projected/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-kube-api-access-jpwws\") pod \"nova-metadata-0\" (UID: \"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2\") " pod="openstack/nova-metadata-0" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.177729 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-logs\") pod \"nova-metadata-0\" (UID: \"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2\") " pod="openstack/nova-metadata-0" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.180215 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-logs\") pod \"nova-metadata-0\" (UID: \"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2\") " pod="openstack/nova-metadata-0" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.181963 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2\") " pod="openstack/nova-metadata-0" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.182526 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-config-data\") pod \"nova-metadata-0\" (UID: \"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2\") " pod="openstack/nova-metadata-0" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.180999 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2\") " pod="openstack/nova-metadata-0" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.193466 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpwws\" (UniqueName: \"kubernetes.io/projected/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-kube-api-access-jpwws\") pod \"nova-metadata-0\" (UID: 
\"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2\") " pod="openstack/nova-metadata-0" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.384589 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.971244 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"42a95d10-e572-4170-aa79-9b98d2c290b7","Type":"ContainerStarted","Data":"6ac0538616a3648f71e6a4666406a24204efd5fd5b3dddce26eb4a5e79c8f94b"} Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.971729 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.973649 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="98f36c5e-b827-4fcb-ac98-8eb62f230787" containerName="nova-scheduler-scheduler" containerID="cri-o://ed18e3d0cc57e852fb841e3e550e978f9f8476f1f109f8ea1dd4470d23e32466" gracePeriod=30 Nov 24 11:27:44 crc kubenswrapper[5072]: I1124 11:27:44.991249 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=1.991234214 podStartE2EDuration="1.991234214s" podCreationTimestamp="2025-11-24 11:27:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:27:44.987794778 +0000 UTC m=+1116.699319254" watchObservedRunningTime="2025-11-24 11:27:44.991234214 +0000 UTC m=+1116.702758690" Nov 24 11:27:45 crc kubenswrapper[5072]: I1124 11:27:45.025098 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6203439c-7b33-45b5-b052-9a09e6df2f11" path="/var/lib/kubelet/pods/6203439c-7b33-45b5-b052-9a09e6df2f11/volumes" Nov 24 11:27:45 crc kubenswrapper[5072]: I1124 11:27:45.025920 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80fddc5b-cdae-434c-a4e1-1fac9405a39a" path="/var/lib/kubelet/pods/80fddc5b-cdae-434c-a4e1-1fac9405a39a/volumes" Nov 24 11:27:45 crc kubenswrapper[5072]: I1124 11:27:45.179252 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:27:45 crc kubenswrapper[5072]: I1124 11:27:45.985296 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2","Type":"ContainerStarted","Data":"bf8f5fd1e53d40c0f76857d4a12e1ce7b670df788f3055203f8069d9cbb7ee24"} Nov 24 11:27:45 crc kubenswrapper[5072]: I1124 11:27:45.986049 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2","Type":"ContainerStarted","Data":"7ed93f6dfb00cf4d5234145c5d3271873d4c1eac308bc55c4d300f8b1e890d2a"} Nov 24 11:27:45 crc kubenswrapper[5072]: I1124 11:27:45.986078 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2","Type":"ContainerStarted","Data":"14b022c2f3bfb8a8194c032c26d63079f77f6358a0d2e077b5d2c41cc672c28a"} Nov 24 11:27:46 crc kubenswrapper[5072]: I1124 11:27:46.011871 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.011845476 podStartE2EDuration="3.011845476s" podCreationTimestamp="2025-11-24 11:27:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:27:46.008509703 +0000 UTC m=+1117.720034219" watchObservedRunningTime="2025-11-24 11:27:46.011845476 +0000 UTC m=+1117.723369982" Nov 24 11:27:47 crc kubenswrapper[5072]: E1124 11:27:47.225130 5072 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ed18e3d0cc57e852fb841e3e550e978f9f8476f1f109f8ea1dd4470d23e32466" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 11:27:47 crc kubenswrapper[5072]: E1124 11:27:47.228162 5072 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ed18e3d0cc57e852fb841e3e550e978f9f8476f1f109f8ea1dd4470d23e32466" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 11:27:47 crc kubenswrapper[5072]: E1124 11:27:47.230650 5072 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ed18e3d0cc57e852fb841e3e550e978f9f8476f1f109f8ea1dd4470d23e32466" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 11:27:47 crc kubenswrapper[5072]: E1124 11:27:47.230739 5072 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="98f36c5e-b827-4fcb-ac98-8eb62f230787" containerName="nova-scheduler-scheduler" Nov 24 11:27:48 crc kubenswrapper[5072]: I1124 11:27:48.013265 5072 generic.go:334] "Generic (PLEG): container finished" podID="98f36c5e-b827-4fcb-ac98-8eb62f230787" containerID="ed18e3d0cc57e852fb841e3e550e978f9f8476f1f109f8ea1dd4470d23e32466" exitCode=0 Nov 24 11:27:48 crc kubenswrapper[5072]: I1124 11:27:48.013457 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"98f36c5e-b827-4fcb-ac98-8eb62f230787","Type":"ContainerDied","Data":"ed18e3d0cc57e852fb841e3e550e978f9f8476f1f109f8ea1dd4470d23e32466"} Nov 24 11:27:48 crc kubenswrapper[5072]: I1124 11:27:48.138737 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 11:27:48 crc kubenswrapper[5072]: I1124 11:27:48.400716 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 11:27:48 crc kubenswrapper[5072]: I1124 11:27:48.483244 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79kld\" (UniqueName: \"kubernetes.io/projected/98f36c5e-b827-4fcb-ac98-8eb62f230787-kube-api-access-79kld\") pod \"98f36c5e-b827-4fcb-ac98-8eb62f230787\" (UID: \"98f36c5e-b827-4fcb-ac98-8eb62f230787\") " Nov 24 11:27:48 crc kubenswrapper[5072]: I1124 11:27:48.483431 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f36c5e-b827-4fcb-ac98-8eb62f230787-combined-ca-bundle\") pod \"98f36c5e-b827-4fcb-ac98-8eb62f230787\" (UID: \"98f36c5e-b827-4fcb-ac98-8eb62f230787\") " Nov 24 11:27:48 crc kubenswrapper[5072]: I1124 11:27:48.483499 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f36c5e-b827-4fcb-ac98-8eb62f230787-config-data\") pod \"98f36c5e-b827-4fcb-ac98-8eb62f230787\" (UID: \"98f36c5e-b827-4fcb-ac98-8eb62f230787\") " Nov 24 11:27:48 crc kubenswrapper[5072]: I1124 11:27:48.496756 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98f36c5e-b827-4fcb-ac98-8eb62f230787-kube-api-access-79kld" (OuterVolumeSpecName: "kube-api-access-79kld") pod "98f36c5e-b827-4fcb-ac98-8eb62f230787" (UID: "98f36c5e-b827-4fcb-ac98-8eb62f230787"). InnerVolumeSpecName "kube-api-access-79kld". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:27:48 crc kubenswrapper[5072]: I1124 11:27:48.511109 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f36c5e-b827-4fcb-ac98-8eb62f230787-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98f36c5e-b827-4fcb-ac98-8eb62f230787" (UID: "98f36c5e-b827-4fcb-ac98-8eb62f230787"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:27:48 crc kubenswrapper[5072]: I1124 11:27:48.514903 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f36c5e-b827-4fcb-ac98-8eb62f230787-config-data" (OuterVolumeSpecName: "config-data") pod "98f36c5e-b827-4fcb-ac98-8eb62f230787" (UID: "98f36c5e-b827-4fcb-ac98-8eb62f230787"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:27:48 crc kubenswrapper[5072]: I1124 11:27:48.585082 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79kld\" (UniqueName: \"kubernetes.io/projected/98f36c5e-b827-4fcb-ac98-8eb62f230787-kube-api-access-79kld\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:48 crc kubenswrapper[5072]: I1124 11:27:48.585118 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f36c5e-b827-4fcb-ac98-8eb62f230787-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:48 crc kubenswrapper[5072]: I1124 11:27:48.585128 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f36c5e-b827-4fcb-ac98-8eb62f230787-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:48 crc kubenswrapper[5072]: I1124 11:27:48.953592 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:27:48 crc kubenswrapper[5072]: I1124 11:27:48.990257 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/384915e0-f433-462f-82ab-d31ebaeb63d1-combined-ca-bundle\") pod \"384915e0-f433-462f-82ab-d31ebaeb63d1\" (UID: \"384915e0-f433-462f-82ab-d31ebaeb63d1\") " Nov 24 11:27:48 crc kubenswrapper[5072]: I1124 11:27:48.990564 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rc9b\" (UniqueName: \"kubernetes.io/projected/384915e0-f433-462f-82ab-d31ebaeb63d1-kube-api-access-4rc9b\") pod \"384915e0-f433-462f-82ab-d31ebaeb63d1\" (UID: \"384915e0-f433-462f-82ab-d31ebaeb63d1\") " Nov 24 11:27:48 crc kubenswrapper[5072]: I1124 11:27:48.990693 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/384915e0-f433-462f-82ab-d31ebaeb63d1-logs\") pod \"384915e0-f433-462f-82ab-d31ebaeb63d1\" (UID: \"384915e0-f433-462f-82ab-d31ebaeb63d1\") " Nov 24 11:27:48 crc kubenswrapper[5072]: I1124 11:27:48.990752 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/384915e0-f433-462f-82ab-d31ebaeb63d1-config-data\") pod \"384915e0-f433-462f-82ab-d31ebaeb63d1\" (UID: \"384915e0-f433-462f-82ab-d31ebaeb63d1\") " Nov 24 11:27:48 crc kubenswrapper[5072]: I1124 11:27:48.991064 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/384915e0-f433-462f-82ab-d31ebaeb63d1-logs" (OuterVolumeSpecName: "logs") pod "384915e0-f433-462f-82ab-d31ebaeb63d1" (UID: "384915e0-f433-462f-82ab-d31ebaeb63d1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:27:48 crc kubenswrapper[5072]: I1124 11:27:48.991470 5072 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/384915e0-f433-462f-82ab-d31ebaeb63d1-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:48 crc kubenswrapper[5072]: I1124 11:27:48.997189 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/384915e0-f433-462f-82ab-d31ebaeb63d1-kube-api-access-4rc9b" (OuterVolumeSpecName: "kube-api-access-4rc9b") pod "384915e0-f433-462f-82ab-d31ebaeb63d1" (UID: "384915e0-f433-462f-82ab-d31ebaeb63d1"). InnerVolumeSpecName "kube-api-access-4rc9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.023651 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/384915e0-f433-462f-82ab-d31ebaeb63d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "384915e0-f433-462f-82ab-d31ebaeb63d1" (UID: "384915e0-f433-462f-82ab-d31ebaeb63d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.027751 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.032634 5072 generic.go:334] "Generic (PLEG): container finished" podID="384915e0-f433-462f-82ab-d31ebaeb63d1" containerID="e55bc6d582815acbbe92c98b219fe9cd994a4d47e8ddba022e60b691f5fa6fe0" exitCode=0 Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.032733 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.033471 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"98f36c5e-b827-4fcb-ac98-8eb62f230787","Type":"ContainerDied","Data":"c24bb5f70364f561494ac7860e99ae41b3dd7b78e208b13cb816c74a699d5360"} Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.033525 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"384915e0-f433-462f-82ab-d31ebaeb63d1","Type":"ContainerDied","Data":"e55bc6d582815acbbe92c98b219fe9cd994a4d47e8ddba022e60b691f5fa6fe0"} Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.033542 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"384915e0-f433-462f-82ab-d31ebaeb63d1","Type":"ContainerDied","Data":"ed1ebc5c0ee2465b641c7187acf4ef019d93470fa39026deda64a55f83d2e2ba"} Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.033562 5072 scope.go:117] "RemoveContainer" containerID="ed18e3d0cc57e852fb841e3e550e978f9f8476f1f109f8ea1dd4470d23e32466" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.054593 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/384915e0-f433-462f-82ab-d31ebaeb63d1-config-data" (OuterVolumeSpecName: "config-data") pod "384915e0-f433-462f-82ab-d31ebaeb63d1" (UID: "384915e0-f433-462f-82ab-d31ebaeb63d1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.093157 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/384915e0-f433-462f-82ab-d31ebaeb63d1-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.093189 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/384915e0-f433-462f-82ab-d31ebaeb63d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.093200 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rc9b\" (UniqueName: \"kubernetes.io/projected/384915e0-f433-462f-82ab-d31ebaeb63d1-kube-api-access-4rc9b\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.150240 5072 scope.go:117] "RemoveContainer" containerID="e55bc6d582815acbbe92c98b219fe9cd994a4d47e8ddba022e60b691f5fa6fe0" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.166869 5072 scope.go:117] "RemoveContainer" containerID="4ae2bee729ee7067014edf026176f853bbb623ff29ed5ecad8dd51ed077a485a" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.184384 5072 scope.go:117] "RemoveContainer" containerID="e55bc6d582815acbbe92c98b219fe9cd994a4d47e8ddba022e60b691f5fa6fe0" Nov 24 11:27:49 crc kubenswrapper[5072]: E1124 11:27:49.184924 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e55bc6d582815acbbe92c98b219fe9cd994a4d47e8ddba022e60b691f5fa6fe0\": container with ID starting with e55bc6d582815acbbe92c98b219fe9cd994a4d47e8ddba022e60b691f5fa6fe0 not found: ID does not exist" containerID="e55bc6d582815acbbe92c98b219fe9cd994a4d47e8ddba022e60b691f5fa6fe0" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.184960 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e55bc6d582815acbbe92c98b219fe9cd994a4d47e8ddba022e60b691f5fa6fe0"} err="failed to get container status \"e55bc6d582815acbbe92c98b219fe9cd994a4d47e8ddba022e60b691f5fa6fe0\": rpc error: code = NotFound desc = could not find container \"e55bc6d582815acbbe92c98b219fe9cd994a4d47e8ddba022e60b691f5fa6fe0\": container with ID starting with e55bc6d582815acbbe92c98b219fe9cd994a4d47e8ddba022e60b691f5fa6fe0 not found: ID does not exist" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.184986 5072 scope.go:117] "RemoveContainer" containerID="4ae2bee729ee7067014edf026176f853bbb623ff29ed5ecad8dd51ed077a485a" Nov 24 11:27:49 crc kubenswrapper[5072]: E1124 11:27:49.185299 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ae2bee729ee7067014edf026176f853bbb623ff29ed5ecad8dd51ed077a485a\": container with ID starting with 4ae2bee729ee7067014edf026176f853bbb623ff29ed5ecad8dd51ed077a485a not found: ID does not exist" containerID="4ae2bee729ee7067014edf026176f853bbb623ff29ed5ecad8dd51ed077a485a" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.185322 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ae2bee729ee7067014edf026176f853bbb623ff29ed5ecad8dd51ed077a485a"} err="failed to get container status \"4ae2bee729ee7067014edf026176f853bbb623ff29ed5ecad8dd51ed077a485a\": rpc error: code = NotFound desc = could not find container 
\"4ae2bee729ee7067014edf026176f853bbb623ff29ed5ecad8dd51ed077a485a\": container with ID starting with 4ae2bee729ee7067014edf026176f853bbb623ff29ed5ecad8dd51ed077a485a not found: ID does not exist" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.371270 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.385215 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.385273 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.390913 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.402084 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 11:27:49 crc kubenswrapper[5072]: E1124 11:27:49.402993 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="384915e0-f433-462f-82ab-d31ebaeb63d1" containerName="nova-api-log" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.403025 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="384915e0-f433-462f-82ab-d31ebaeb63d1" containerName="nova-api-log" Nov 24 11:27:49 crc kubenswrapper[5072]: E1124 11:27:49.403055 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="384915e0-f433-462f-82ab-d31ebaeb63d1" containerName="nova-api-api" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.403071 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="384915e0-f433-462f-82ab-d31ebaeb63d1" containerName="nova-api-api" Nov 24 11:27:49 crc kubenswrapper[5072]: E1124 11:27:49.403121 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f36c5e-b827-4fcb-ac98-8eb62f230787" containerName="nova-scheduler-scheduler" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.403139 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f36c5e-b827-4fcb-ac98-8eb62f230787" containerName="nova-scheduler-scheduler" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.403594 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="384915e0-f433-462f-82ab-d31ebaeb63d1" containerName="nova-api-log" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.403661 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="384915e0-f433-462f-82ab-d31ebaeb63d1" containerName="nova-api-api" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.403694 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="98f36c5e-b827-4fcb-ac98-8eb62f230787" containerName="nova-scheduler-scheduler" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.405514 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.411184 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.415281 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.498656 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e65ee4-652d-4453-9ea6-50b067da9715-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"38e65ee4-652d-4453-9ea6-50b067da9715\") " pod="openstack/nova-api-0" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.498826 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38e65ee4-652d-4453-9ea6-50b067da9715-logs\") pod \"nova-api-0\" (UID: \"38e65ee4-652d-4453-9ea6-50b067da9715\") " pod="openstack/nova-api-0" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.498903 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f488\" (UniqueName: \"kubernetes.io/projected/38e65ee4-652d-4453-9ea6-50b067da9715-kube-api-access-7f488\") pod \"nova-api-0\" (UID: \"38e65ee4-652d-4453-9ea6-50b067da9715\") " pod="openstack/nova-api-0" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.499006 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e65ee4-652d-4453-9ea6-50b067da9715-config-data\") pod \"nova-api-0\" (UID: \"38e65ee4-652d-4453-9ea6-50b067da9715\") " pod="openstack/nova-api-0" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.600126 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e65ee4-652d-4453-9ea6-50b067da9715-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"38e65ee4-652d-4453-9ea6-50b067da9715\") " pod="openstack/nova-api-0" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.600223 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38e65ee4-652d-4453-9ea6-50b067da9715-logs\") pod \"nova-api-0\" (UID: \"38e65ee4-652d-4453-9ea6-50b067da9715\") " pod="openstack/nova-api-0" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.600274 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f488\" (UniqueName: \"kubernetes.io/projected/38e65ee4-652d-4453-9ea6-50b067da9715-kube-api-access-7f488\") pod \"nova-api-0\" (UID: \"38e65ee4-652d-4453-9ea6-50b067da9715\") " pod="openstack/nova-api-0" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.600300 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e65ee4-652d-4453-9ea6-50b067da9715-config-data\") pod \"nova-api-0\" (UID: \"38e65ee4-652d-4453-9ea6-50b067da9715\") " pod="openstack/nova-api-0" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.600789 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38e65ee4-652d-4453-9ea6-50b067da9715-logs\") pod \"nova-api-0\" (UID: \"38e65ee4-652d-4453-9ea6-50b067da9715\") " 
pod="openstack/nova-api-0" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.605099 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e65ee4-652d-4453-9ea6-50b067da9715-config-data\") pod \"nova-api-0\" (UID: \"38e65ee4-652d-4453-9ea6-50b067da9715\") " pod="openstack/nova-api-0" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.606648 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e65ee4-652d-4453-9ea6-50b067da9715-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"38e65ee4-652d-4453-9ea6-50b067da9715\") " pod="openstack/nova-api-0" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.632018 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f488\" (UniqueName: \"kubernetes.io/projected/38e65ee4-652d-4453-9ea6-50b067da9715-kube-api-access-7f488\") pod \"nova-api-0\" (UID: \"38e65ee4-652d-4453-9ea6-50b067da9715\") " pod="openstack/nova-api-0" Nov 24 11:27:49 crc kubenswrapper[5072]: I1124 11:27:49.732056 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:27:50 crc kubenswrapper[5072]: I1124 11:27:50.195137 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:27:50 crc kubenswrapper[5072]: I1124 11:27:50.382059 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 11:27:50 crc kubenswrapper[5072]: I1124 11:27:50.382543 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="550025c7-4dd7-452e-85f8-6355aaa6feb6" containerName="kube-state-metrics" containerID="cri-o://4ae022196a19d67accf88e4d57f525bb2bb37c8d0e158d122c4641d674f78983" gracePeriod=30 Nov 24 11:27:50 crc kubenswrapper[5072]: I1124 11:27:50.785449 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 11:27:50 crc kubenswrapper[5072]: I1124 11:27:50.823933 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhsr4\" (UniqueName: \"kubernetes.io/projected/550025c7-4dd7-452e-85f8-6355aaa6feb6-kube-api-access-hhsr4\") pod \"550025c7-4dd7-452e-85f8-6355aaa6feb6\" (UID: \"550025c7-4dd7-452e-85f8-6355aaa6feb6\") " Nov 24 11:27:50 crc kubenswrapper[5072]: I1124 11:27:50.829561 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/550025c7-4dd7-452e-85f8-6355aaa6feb6-kube-api-access-hhsr4" (OuterVolumeSpecName: "kube-api-access-hhsr4") pod "550025c7-4dd7-452e-85f8-6355aaa6feb6" (UID: "550025c7-4dd7-452e-85f8-6355aaa6feb6"). InnerVolumeSpecName "kube-api-access-hhsr4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:27:50 crc kubenswrapper[5072]: I1124 11:27:50.925802 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhsr4\" (UniqueName: \"kubernetes.io/projected/550025c7-4dd7-452e-85f8-6355aaa6feb6-kube-api-access-hhsr4\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.026386 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="384915e0-f433-462f-82ab-d31ebaeb63d1" path="/var/lib/kubelet/pods/384915e0-f433-462f-82ab-d31ebaeb63d1/volumes" Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.061673 5072 generic.go:334] "Generic (PLEG): container finished" podID="550025c7-4dd7-452e-85f8-6355aaa6feb6" containerID="4ae022196a19d67accf88e4d57f525bb2bb37c8d0e158d122c4641d674f78983" exitCode=2 Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.061773 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"550025c7-4dd7-452e-85f8-6355aaa6feb6","Type":"ContainerDied","Data":"4ae022196a19d67accf88e4d57f525bb2bb37c8d0e158d122c4641d674f78983"} Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.061808 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"550025c7-4dd7-452e-85f8-6355aaa6feb6","Type":"ContainerDied","Data":"f6dd3c766c75daad560ecfaf23e6c529a4a3e71322c280b71db505fb9d9412b6"} Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.061844 5072 scope.go:117] "RemoveContainer" containerID="4ae022196a19d67accf88e4d57f525bb2bb37c8d0e158d122c4641d674f78983" Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.062032 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.065288 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"38e65ee4-652d-4453-9ea6-50b067da9715","Type":"ContainerStarted","Data":"e514dd679a5970456992ef29bdcbc5e10593cb5f01ff47e87295d9d61faa44c3"} Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.065350 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"38e65ee4-652d-4453-9ea6-50b067da9715","Type":"ContainerStarted","Data":"590b271e7d29a2015a1d4fe6d86ecbaae249029946c53767a0af9e9128711204"} Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.065394 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"38e65ee4-652d-4453-9ea6-50b067da9715","Type":"ContainerStarted","Data":"57ba67da85711e3fffd4685322d1892775d86a75ec0f1f2fcda7dc44ccf8c818"} Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.114595 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.114576449 podStartE2EDuration="2.114576449s" podCreationTimestamp="2025-11-24 11:27:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:27:51.085404073 +0000 UTC m=+1122.796928569" watchObservedRunningTime="2025-11-24 11:27:51.114576449 +0000 UTC m=+1122.826100925" Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.118233 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.126630 5072 scope.go:117] "RemoveContainer" 
containerID="4ae022196a19d67accf88e4d57f525bb2bb37c8d0e158d122c4641d674f78983" Nov 24 11:27:51 crc kubenswrapper[5072]: E1124 11:27:51.131605 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ae022196a19d67accf88e4d57f525bb2bb37c8d0e158d122c4641d674f78983\": container with ID starting with 4ae022196a19d67accf88e4d57f525bb2bb37c8d0e158d122c4641d674f78983 not found: ID does not exist" containerID="4ae022196a19d67accf88e4d57f525bb2bb37c8d0e158d122c4641d674f78983" Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.131663 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ae022196a19d67accf88e4d57f525bb2bb37c8d0e158d122c4641d674f78983"} err="failed to get container status \"4ae022196a19d67accf88e4d57f525bb2bb37c8d0e158d122c4641d674f78983\": rpc error: code = NotFound desc = could not find container \"4ae022196a19d67accf88e4d57f525bb2bb37c8d0e158d122c4641d674f78983\": container with ID starting with 4ae022196a19d67accf88e4d57f525bb2bb37c8d0e158d122c4641d674f78983 not found: ID does not exist" Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.134688 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.146496 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 11:27:51 crc kubenswrapper[5072]: E1124 11:27:51.146982 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="550025c7-4dd7-452e-85f8-6355aaa6feb6" containerName="kube-state-metrics" Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.147005 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="550025c7-4dd7-452e-85f8-6355aaa6feb6" containerName="kube-state-metrics" Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.147190 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="550025c7-4dd7-452e-85f8-6355aaa6feb6" containerName="kube-state-metrics" Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.147921 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.150913 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.151243 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.155253 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.331968 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d9aa589-2a3a-4e9a-a1d6-92fc939cf2f6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4d9aa589-2a3a-4e9a-a1d6-92fc939cf2f6\") " pod="openstack/kube-state-metrics-0" Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.332392 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d9aa589-2a3a-4e9a-a1d6-92fc939cf2f6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4d9aa589-2a3a-4e9a-a1d6-92fc939cf2f6\") " pod="openstack/kube-state-metrics-0" Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.332528 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdhqt\" (UniqueName: \"kubernetes.io/projected/4d9aa589-2a3a-4e9a-a1d6-92fc939cf2f6-kube-api-access-rdhqt\") pod \"kube-state-metrics-0\" (UID: \"4d9aa589-2a3a-4e9a-a1d6-92fc939cf2f6\") " pod="openstack/kube-state-metrics-0" Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.332617 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4d9aa589-2a3a-4e9a-a1d6-92fc939cf2f6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4d9aa589-2a3a-4e9a-a1d6-92fc939cf2f6\") " pod="openstack/kube-state-metrics-0" Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.434689 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d9aa589-2a3a-4e9a-a1d6-92fc939cf2f6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4d9aa589-2a3a-4e9a-a1d6-92fc939cf2f6\") " pod="openstack/kube-state-metrics-0" Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.434802 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdhqt\" (UniqueName: \"kubernetes.io/projected/4d9aa589-2a3a-4e9a-a1d6-92fc939cf2f6-kube-api-access-rdhqt\") pod \"kube-state-metrics-0\" (UID: \"4d9aa589-2a3a-4e9a-a1d6-92fc939cf2f6\") " pod="openstack/kube-state-metrics-0" Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.434866 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4d9aa589-2a3a-4e9a-a1d6-92fc939cf2f6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4d9aa589-2a3a-4e9a-a1d6-92fc939cf2f6\") " pod="openstack/kube-state-metrics-0" Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.435003 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/4d9aa589-2a3a-4e9a-a1d6-92fc939cf2f6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4d9aa589-2a3a-4e9a-a1d6-92fc939cf2f6\") " pod="openstack/kube-state-metrics-0" Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.439653 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4d9aa589-2a3a-4e9a-a1d6-92fc939cf2f6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4d9aa589-2a3a-4e9a-a1d6-92fc939cf2f6\") " pod="openstack/kube-state-metrics-0" Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.440406 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d9aa589-2a3a-4e9a-a1d6-92fc939cf2f6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4d9aa589-2a3a-4e9a-a1d6-92fc939cf2f6\") " pod="openstack/kube-state-metrics-0" Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.441616 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d9aa589-2a3a-4e9a-a1d6-92fc939cf2f6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4d9aa589-2a3a-4e9a-a1d6-92fc939cf2f6\") " pod="openstack/kube-state-metrics-0" Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.466571 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdhqt\" (UniqueName: \"kubernetes.io/projected/4d9aa589-2a3a-4e9a-a1d6-92fc939cf2f6-kube-api-access-rdhqt\") pod \"kube-state-metrics-0\" (UID: \"4d9aa589-2a3a-4e9a-a1d6-92fc939cf2f6\") " pod="openstack/kube-state-metrics-0" Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.493150 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.493477 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="743c36a5-f4ff-4c6b-8b2d-386827b23ec1" containerName="ceilometer-central-agent" containerID="cri-o://3ed90f078e7a639da35ddd96ea70933999614837069375acc2126f016e4c410a" gracePeriod=30 Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.493600 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="743c36a5-f4ff-4c6b-8b2d-386827b23ec1" containerName="ceilometer-notification-agent" containerID="cri-o://f97e4372d90e0a4327ee28a352f4c7287ff21b246a135f8b7cb9b22b70a7b9ca" gracePeriod=30 Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.493584 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="743c36a5-f4ff-4c6b-8b2d-386827b23ec1" containerName="sg-core" containerID="cri-o://8f452ce9832c7bba8516c1eaaef237e830674368fcd24ffb815090cda369419e" gracePeriod=30 Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.493630 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="743c36a5-f4ff-4c6b-8b2d-386827b23ec1" containerName="proxy-httpd" containerID="cri-o://0b62e23958a2f2881c856aef432dfff7147e923376216bfda1bcc2f2c95a6bf9" gracePeriod=30 Nov 24 11:27:51 crc kubenswrapper[5072]: I1124 11:27:51.763696 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 11:27:52 crc kubenswrapper[5072]: I1124 11:27:52.081744 5072 generic.go:334] "Generic (PLEG): container finished" podID="743c36a5-f4ff-4c6b-8b2d-386827b23ec1" containerID="0b62e23958a2f2881c856aef432dfff7147e923376216bfda1bcc2f2c95a6bf9" exitCode=0 Nov 24 11:27:52 crc kubenswrapper[5072]: I1124 11:27:52.081972 5072 generic.go:334] "Generic (PLEG): container finished" podID="743c36a5-f4ff-4c6b-8b2d-386827b23ec1" containerID="8f452ce9832c7bba8516c1eaaef237e830674368fcd24ffb815090cda369419e" exitCode=2 Nov 24 11:27:52 crc kubenswrapper[5072]: I1124 11:27:52.081981 5072 generic.go:334] "Generic (PLEG): container finished" podID="743c36a5-f4ff-4c6b-8b2d-386827b23ec1" containerID="3ed90f078e7a639da35ddd96ea70933999614837069375acc2126f016e4c410a" exitCode=0 Nov 24 11:27:52 crc kubenswrapper[5072]: I1124 11:27:52.081820 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"743c36a5-f4ff-4c6b-8b2d-386827b23ec1","Type":"ContainerDied","Data":"0b62e23958a2f2881c856aef432dfff7147e923376216bfda1bcc2f2c95a6bf9"} Nov 24 11:27:52 crc kubenswrapper[5072]: I1124 11:27:52.082039 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"743c36a5-f4ff-4c6b-8b2d-386827b23ec1","Type":"ContainerDied","Data":"8f452ce9832c7bba8516c1eaaef237e830674368fcd24ffb815090cda369419e"} Nov 24 11:27:52 crc kubenswrapper[5072]: I1124 11:27:52.082054 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"743c36a5-f4ff-4c6b-8b2d-386827b23ec1","Type":"ContainerDied","Data":"3ed90f078e7a639da35ddd96ea70933999614837069375acc2126f016e4c410a"} Nov 24 11:27:52 crc kubenswrapper[5072]: W1124 11:27:52.214607 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d9aa589_2a3a_4e9a_a1d6_92fc939cf2f6.slice/crio-77f423b917adf9ab65936aa860535c7bd47c395a8cbc7bc52d066c34393def6f WatchSource:0}: Error finding container 77f423b917adf9ab65936aa860535c7bd47c395a8cbc7bc52d066c34393def6f: Status 404 returned error can't find the container with id 77f423b917adf9ab65936aa860535c7bd47c395a8cbc7bc52d066c34393def6f Nov 24 11:27:52 crc kubenswrapper[5072]: I1124 11:27:52.224158 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 11:27:53 crc kubenswrapper[5072]: I1124 11:27:53.031239 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="550025c7-4dd7-452e-85f8-6355aaa6feb6" path="/var/lib/kubelet/pods/550025c7-4dd7-452e-85f8-6355aaa6feb6/volumes" Nov 24 11:27:53 crc kubenswrapper[5072]: I1124 11:27:53.098108 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4d9aa589-2a3a-4e9a-a1d6-92fc939cf2f6","Type":"ContainerStarted","Data":"f7d67423d8eeaa4ac4afe24c2b3d698740fa561e89ae76777c1ea850719e4320"} Nov 24 11:27:53 crc kubenswrapper[5072]: I1124 11:27:53.098146 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4d9aa589-2a3a-4e9a-a1d6-92fc939cf2f6","Type":"ContainerStarted","Data":"77f423b917adf9ab65936aa860535c7bd47c395a8cbc7bc52d066c34393def6f"} Nov 24 11:27:53 crc kubenswrapper[5072]: I1124 11:27:53.098237 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 24 11:27:53 crc kubenswrapper[5072]: I1124 11:27:53.118289 5072 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.645072441 podStartE2EDuration="2.118270379s" podCreationTimestamp="2025-11-24 11:27:51 +0000 UTC" firstStartedPulling="2025-11-24 11:27:52.217163629 +0000 UTC m=+1123.928688105" lastFinishedPulling="2025-11-24 11:27:52.690361527 +0000 UTC m=+1124.401886043" observedRunningTime="2025-11-24 11:27:53.113841459 +0000 UTC m=+1124.825365935" watchObservedRunningTime="2025-11-24 11:27:53.118270379 +0000 UTC m=+1124.829794845" Nov 24 11:27:53 crc kubenswrapper[5072]: I1124 11:27:53.454858 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 24 11:27:54 crc kubenswrapper[5072]: I1124 11:27:54.385495 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 11:27:54 crc kubenswrapper[5072]: I1124 11:27:54.385750 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 11:27:55 crc kubenswrapper[5072]: I1124 11:27:55.402593 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.175:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 11:27:55 crc kubenswrapper[5072]: I1124 11:27:55.402620 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.175:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 11:27:55 crc kubenswrapper[5072]: I1124 11:27:55.857117 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:27:55 crc kubenswrapper[5072]: I1124 11:27:55.915953 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-config-data\") pod \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\" (UID: \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\") " Nov 24 11:27:55 crc kubenswrapper[5072]: I1124 11:27:55.916003 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-combined-ca-bundle\") pod \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\" (UID: \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\") " Nov 24 11:27:55 crc kubenswrapper[5072]: I1124 11:27:55.916050 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-scripts\") pod \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\" (UID: \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\") " Nov 24 11:27:55 crc kubenswrapper[5072]: I1124 11:27:55.916094 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phw6n\" (UniqueName: \"kubernetes.io/projected/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-kube-api-access-phw6n\") pod \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\" (UID: \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\") " Nov 24 11:27:55 crc kubenswrapper[5072]: I1124 11:27:55.916154 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-sg-core-conf-yaml\") pod \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\" (UID: \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\") " Nov 24 11:27:55 crc kubenswrapper[5072]: I1124 11:27:55.916172 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-run-httpd\") pod \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\" (UID: \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\") " Nov 24 11:27:55 crc kubenswrapper[5072]: I1124 11:27:55.916199 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-log-httpd\") pod \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\" (UID: \"743c36a5-f4ff-4c6b-8b2d-386827b23ec1\") " Nov 24 11:27:55 crc kubenswrapper[5072]: I1124 11:27:55.916823 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "743c36a5-f4ff-4c6b-8b2d-386827b23ec1" (UID: "743c36a5-f4ff-4c6b-8b2d-386827b23ec1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:27:55 crc kubenswrapper[5072]: I1124 11:27:55.916881 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "743c36a5-f4ff-4c6b-8b2d-386827b23ec1" (UID: "743c36a5-f4ff-4c6b-8b2d-386827b23ec1"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:27:55 crc kubenswrapper[5072]: I1124 11:27:55.917288 5072 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:55 crc kubenswrapper[5072]: I1124 11:27:55.917301 5072 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:55 crc kubenswrapper[5072]: I1124 11:27:55.922997 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-kube-api-access-phw6n" (OuterVolumeSpecName: "kube-api-access-phw6n") pod "743c36a5-f4ff-4c6b-8b2d-386827b23ec1" (UID: "743c36a5-f4ff-4c6b-8b2d-386827b23ec1"). InnerVolumeSpecName "kube-api-access-phw6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:27:55 crc kubenswrapper[5072]: I1124 11:27:55.943077 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-scripts" (OuterVolumeSpecName: "scripts") pod "743c36a5-f4ff-4c6b-8b2d-386827b23ec1" (UID: "743c36a5-f4ff-4c6b-8b2d-386827b23ec1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:27:55 crc kubenswrapper[5072]: I1124 11:27:55.947898 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "743c36a5-f4ff-4c6b-8b2d-386827b23ec1" (UID: "743c36a5-f4ff-4c6b-8b2d-386827b23ec1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.007648 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "743c36a5-f4ff-4c6b-8b2d-386827b23ec1" (UID: "743c36a5-f4ff-4c6b-8b2d-386827b23ec1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.018227 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.018298 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.018319 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phw6n\" (UniqueName: \"kubernetes.io/projected/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-kube-api-access-phw6n\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.018335 5072 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.019482 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-config-data" (OuterVolumeSpecName: "config-data") pod "743c36a5-f4ff-4c6b-8b2d-386827b23ec1" (UID: "743c36a5-f4ff-4c6b-8b2d-386827b23ec1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.125289 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/743c36a5-f4ff-4c6b-8b2d-386827b23ec1-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.134787 5072 generic.go:334] "Generic (PLEG): container finished" podID="743c36a5-f4ff-4c6b-8b2d-386827b23ec1" containerID="f97e4372d90e0a4327ee28a352f4c7287ff21b246a135f8b7cb9b22b70a7b9ca" exitCode=0 Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.134840 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"743c36a5-f4ff-4c6b-8b2d-386827b23ec1","Type":"ContainerDied","Data":"f97e4372d90e0a4327ee28a352f4c7287ff21b246a135f8b7cb9b22b70a7b9ca"} Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.134877 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"743c36a5-f4ff-4c6b-8b2d-386827b23ec1","Type":"ContainerDied","Data":"40368693b12b55779f6d2d447ed7003550a6090f1e48a270b2c38a8e5a444581"} Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.134905 5072 scope.go:117] "RemoveContainer" containerID="0b62e23958a2f2881c856aef432dfff7147e923376216bfda1bcc2f2c95a6bf9" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.135081 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.155243 5072 scope.go:117] "RemoveContainer" containerID="8f452ce9832c7bba8516c1eaaef237e830674368fcd24ffb815090cda369419e" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.183213 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.190465 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.190621 5072 scope.go:117] "RemoveContainer" containerID="f97e4372d90e0a4327ee28a352f4c7287ff21b246a135f8b7cb9b22b70a7b9ca" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.201008 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:27:56 crc kubenswrapper[5072]: E1124 11:27:56.201326 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="743c36a5-f4ff-4c6b-8b2d-386827b23ec1" containerName="ceilometer-notification-agent" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.201344 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="743c36a5-f4ff-4c6b-8b2d-386827b23ec1" containerName="ceilometer-notification-agent" Nov 24 11:27:56 crc kubenswrapper[5072]: E1124 11:27:56.201388 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="743c36a5-f4ff-4c6b-8b2d-386827b23ec1" containerName="sg-core" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.201395 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="743c36a5-f4ff-4c6b-8b2d-386827b23ec1" containerName="sg-core" Nov 24 11:27:56 crc kubenswrapper[5072]: E1124 11:27:56.201407 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="743c36a5-f4ff-4c6b-8b2d-386827b23ec1" containerName="ceilometer-central-agent" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.201413 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="743c36a5-f4ff-4c6b-8b2d-386827b23ec1" containerName="ceilometer-central-agent" Nov 24 11:27:56 crc kubenswrapper[5072]: E1124 11:27:56.201420 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="743c36a5-f4ff-4c6b-8b2d-386827b23ec1" containerName="proxy-httpd" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.201425 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="743c36a5-f4ff-4c6b-8b2d-386827b23ec1" containerName="proxy-httpd" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.201582 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="743c36a5-f4ff-4c6b-8b2d-386827b23ec1" containerName="sg-core" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.201596 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="743c36a5-f4ff-4c6b-8b2d-386827b23ec1" containerName="proxy-httpd" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.201606 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="743c36a5-f4ff-4c6b-8b2d-386827b23ec1" containerName="ceilometer-central-agent" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.201621 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="743c36a5-f4ff-4c6b-8b2d-386827b23ec1" containerName="ceilometer-notification-agent" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.203026 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.206056 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.206258 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.206430 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.219393 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.239035 5072 scope.go:117] "RemoveContainer" containerID="3ed90f078e7a639da35ddd96ea70933999614837069375acc2126f016e4c410a" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.264521 5072 scope.go:117] "RemoveContainer" containerID="0b62e23958a2f2881c856aef432dfff7147e923376216bfda1bcc2f2c95a6bf9" Nov 24 11:27:56 crc kubenswrapper[5072]: E1124 11:27:56.265064 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b62e23958a2f2881c856aef432dfff7147e923376216bfda1bcc2f2c95a6bf9\": container with ID starting with 0b62e23958a2f2881c856aef432dfff7147e923376216bfda1bcc2f2c95a6bf9 not found: ID does not exist" containerID="0b62e23958a2f2881c856aef432dfff7147e923376216bfda1bcc2f2c95a6bf9" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.265107 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b62e23958a2f2881c856aef432dfff7147e923376216bfda1bcc2f2c95a6bf9"} err="failed to get container status \"0b62e23958a2f2881c856aef432dfff7147e923376216bfda1bcc2f2c95a6bf9\": rpc error: code = NotFound desc = could not find container \"0b62e23958a2f2881c856aef432dfff7147e923376216bfda1bcc2f2c95a6bf9\": container with ID starting with 0b62e23958a2f2881c856aef432dfff7147e923376216bfda1bcc2f2c95a6bf9 not found: ID does not exist" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.265136 5072 scope.go:117] "RemoveContainer" containerID="8f452ce9832c7bba8516c1eaaef237e830674368fcd24ffb815090cda369419e" Nov 24 11:27:56 crc kubenswrapper[5072]: E1124 11:27:56.265453 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f452ce9832c7bba8516c1eaaef237e830674368fcd24ffb815090cda369419e\": container with ID starting with 8f452ce9832c7bba8516c1eaaef237e830674368fcd24ffb815090cda369419e not found: ID does not exist" containerID="8f452ce9832c7bba8516c1eaaef237e830674368fcd24ffb815090cda369419e" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.265509 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f452ce9832c7bba8516c1eaaef237e830674368fcd24ffb815090cda369419e"} err="failed to get container status \"8f452ce9832c7bba8516c1eaaef237e830674368fcd24ffb815090cda369419e\": rpc error: code = NotFound desc = could not find container \"8f452ce9832c7bba8516c1eaaef237e830674368fcd24ffb815090cda369419e\": container with ID starting with 8f452ce9832c7bba8516c1eaaef237e830674368fcd24ffb815090cda369419e not found: ID does not exist" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.265539 5072 scope.go:117] "RemoveContainer" containerID="f97e4372d90e0a4327ee28a352f4c7287ff21b246a135f8b7cb9b22b70a7b9ca" Nov 24 11:27:56 
crc kubenswrapper[5072]: E1124 11:27:56.265820 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f97e4372d90e0a4327ee28a352f4c7287ff21b246a135f8b7cb9b22b70a7b9ca\": container with ID starting with f97e4372d90e0a4327ee28a352f4c7287ff21b246a135f8b7cb9b22b70a7b9ca not found: ID does not exist" containerID="f97e4372d90e0a4327ee28a352f4c7287ff21b246a135f8b7cb9b22b70a7b9ca" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.265854 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f97e4372d90e0a4327ee28a352f4c7287ff21b246a135f8b7cb9b22b70a7b9ca"} err="failed to get container status \"f97e4372d90e0a4327ee28a352f4c7287ff21b246a135f8b7cb9b22b70a7b9ca\": rpc error: code = NotFound desc = could not find container \"f97e4372d90e0a4327ee28a352f4c7287ff21b246a135f8b7cb9b22b70a7b9ca\": container with ID starting with f97e4372d90e0a4327ee28a352f4c7287ff21b246a135f8b7cb9b22b70a7b9ca not found: ID does not exist" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.265875 5072 scope.go:117] "RemoveContainer" containerID="3ed90f078e7a639da35ddd96ea70933999614837069375acc2126f016e4c410a" Nov 24 11:27:56 crc kubenswrapper[5072]: E1124 11:27:56.266187 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ed90f078e7a639da35ddd96ea70933999614837069375acc2126f016e4c410a\": container with ID starting with 3ed90f078e7a639da35ddd96ea70933999614837069375acc2126f016e4c410a not found: ID does not exist" containerID="3ed90f078e7a639da35ddd96ea70933999614837069375acc2126f016e4c410a" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.266214 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ed90f078e7a639da35ddd96ea70933999614837069375acc2126f016e4c410a"} err="failed to get container status \"3ed90f078e7a639da35ddd96ea70933999614837069375acc2126f016e4c410a\": rpc error: code = NotFound desc = could not find container \"3ed90f078e7a639da35ddd96ea70933999614837069375acc2126f016e4c410a\": container with ID starting with 3ed90f078e7a639da35ddd96ea70933999614837069375acc2126f016e4c410a not found: ID does not exist" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.328293 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8c03edf-cb55-4853-8227-b65c429794bd-run-httpd\") pod \"ceilometer-0\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " pod="openstack/ceilometer-0" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.328410 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8c03edf-cb55-4853-8227-b65c429794bd-log-httpd\") pod \"ceilometer-0\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " pod="openstack/ceilometer-0" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.328660 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " pod="openstack/ceilometer-0" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.328726 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " pod="openstack/ceilometer-0" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.328773 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hmg8\" (UniqueName: \"kubernetes.io/projected/b8c03edf-cb55-4853-8227-b65c429794bd-kube-api-access-2hmg8\") pod \"ceilometer-0\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " pod="openstack/ceilometer-0" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.329306 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-scripts\") pod \"ceilometer-0\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " pod="openstack/ceilometer-0" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.329609 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-config-data\") pod \"ceilometer-0\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " pod="openstack/ceilometer-0" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.329893 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " pod="openstack/ceilometer-0" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.431597 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8c03edf-cb55-4853-8227-b65c429794bd-log-httpd\") pod \"ceilometer-0\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " pod="openstack/ceilometer-0" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.431693 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " pod="openstack/ceilometer-0" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.431729 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " pod="openstack/ceilometer-0" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.431766 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hmg8\" (UniqueName: \"kubernetes.io/projected/b8c03edf-cb55-4853-8227-b65c429794bd-kube-api-access-2hmg8\") pod \"ceilometer-0\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " pod="openstack/ceilometer-0" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.431829 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-scripts\") pod \"ceilometer-0\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " pod="openstack/ceilometer-0" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 
11:27:56.431879 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-config-data\") pod \"ceilometer-0\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " pod="openstack/ceilometer-0" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.431936 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " pod="openstack/ceilometer-0" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.432012 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8c03edf-cb55-4853-8227-b65c429794bd-run-httpd\") pod \"ceilometer-0\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " pod="openstack/ceilometer-0" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.432089 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8c03edf-cb55-4853-8227-b65c429794bd-log-httpd\") pod \"ceilometer-0\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " pod="openstack/ceilometer-0" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.433047 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8c03edf-cb55-4853-8227-b65c429794bd-run-httpd\") pod \"ceilometer-0\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " pod="openstack/ceilometer-0" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.437062 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " pod="openstack/ceilometer-0" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.438637 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " pod="openstack/ceilometer-0" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.439435 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-config-data\") pod \"ceilometer-0\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " pod="openstack/ceilometer-0" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.440326 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-scripts\") pod \"ceilometer-0\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " pod="openstack/ceilometer-0" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.440981 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " pod="openstack/ceilometer-0" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.456826 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-2hmg8\" (UniqueName: \"kubernetes.io/projected/b8c03edf-cb55-4853-8227-b65c429794bd-kube-api-access-2hmg8\") pod \"ceilometer-0\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " pod="openstack/ceilometer-0" Nov 24 11:27:56 crc kubenswrapper[5072]: I1124 11:27:56.537589 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:27:57 crc kubenswrapper[5072]: I1124 11:27:57.026302 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="743c36a5-f4ff-4c6b-8b2d-386827b23ec1" path="/var/lib/kubelet/pods/743c36a5-f4ff-4c6b-8b2d-386827b23ec1/volumes" Nov 24 11:27:57 crc kubenswrapper[5072]: I1124 11:27:57.076537 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:27:57 crc kubenswrapper[5072]: I1124 11:27:57.156444 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8c03edf-cb55-4853-8227-b65c429794bd","Type":"ContainerStarted","Data":"38af200b3a98fcbff28587186482cf655a09ea98491d5c7d6445daff9eba24a1"} Nov 24 11:27:58 crc kubenswrapper[5072]: I1124 11:27:58.167629 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8c03edf-cb55-4853-8227-b65c429794bd","Type":"ContainerStarted","Data":"fa72750bccd5724b03966ce2905ef4ca1c605e5f17621ac12dbc4a30fabd3b61"} Nov 24 11:27:59 crc kubenswrapper[5072]: I1124 11:27:59.187810 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8c03edf-cb55-4853-8227-b65c429794bd","Type":"ContainerStarted","Data":"116c7b03ea1d5434926d249492f873ad44dcfbbc46a6fe941a618bcad53eee0b"} Nov 24 11:27:59 crc kubenswrapper[5072]: I1124 11:27:59.188145 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8c03edf-cb55-4853-8227-b65c429794bd","Type":"ContainerStarted","Data":"851f5ff11469a32be48240ef4f81d0b7c0e6b06d47a31c096ad77d7de819f41e"} Nov 24 11:27:59 crc kubenswrapper[5072]: I1124 11:27:59.733364 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 11:27:59 crc kubenswrapper[5072]: I1124 11:27:59.733730 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 11:28:00 crc kubenswrapper[5072]: I1124 11:28:00.815535 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="38e65ee4-652d-4453-9ea6-50b067da9715" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.176:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 11:28:00 crc kubenswrapper[5072]: I1124 11:28:00.815624 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="38e65ee4-652d-4453-9ea6-50b067da9715" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.176:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 11:28:01 crc kubenswrapper[5072]: I1124 11:28:01.206962 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8c03edf-cb55-4853-8227-b65c429794bd","Type":"ContainerStarted","Data":"ee79a059c940f82154a7f5309fb75bca4181df17cdfd9bf0b938d5e5869a6560"} Nov 24 11:28:01 crc kubenswrapper[5072]: I1124 11:28:01.207158 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 11:28:01 crc kubenswrapper[5072]: 
I1124 11:28:01.258240 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.137844825 podStartE2EDuration="5.258218287s" podCreationTimestamp="2025-11-24 11:27:56 +0000 UTC" firstStartedPulling="2025-11-24 11:27:57.068024648 +0000 UTC m=+1128.779549114" lastFinishedPulling="2025-11-24 11:28:00.18839809 +0000 UTC m=+1131.899922576" observedRunningTime="2025-11-24 11:28:01.247570792 +0000 UTC m=+1132.959095298" watchObservedRunningTime="2025-11-24 11:28:01.258218287 +0000 UTC m=+1132.969742773" Nov 24 11:28:01 crc kubenswrapper[5072]: I1124 11:28:01.786598 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 24 11:28:04 crc kubenswrapper[5072]: I1124 11:28:04.393663 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 11:28:04 crc kubenswrapper[5072]: I1124 11:28:04.394462 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 11:28:04 crc kubenswrapper[5072]: I1124 11:28:04.402166 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 11:28:04 crc kubenswrapper[5072]: I1124 11:28:04.402764 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.241066 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.264536 5072 generic.go:334] "Generic (PLEG): container finished" podID="dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9" containerID="d333800dfc55359b3b38b4e531c7eb0c21351aa1dbd410d7878194807ee7c163" exitCode=137 Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.264594 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9","Type":"ContainerDied","Data":"d333800dfc55359b3b38b4e531c7eb0c21351aa1dbd410d7878194807ee7c163"} Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.264623 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9","Type":"ContainerDied","Data":"8cdf624e856dd11d7aad1cb86a5a4eea2fabfe91215dab094f37d82aeecdd4ed"} Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.264643 5072 scope.go:117] "RemoveContainer" containerID="d333800dfc55359b3b38b4e531c7eb0c21351aa1dbd410d7878194807ee7c163" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.264775 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.295228 5072 scope.go:117] "RemoveContainer" containerID="d333800dfc55359b3b38b4e531c7eb0c21351aa1dbd410d7878194807ee7c163" Nov 24 11:28:07 crc kubenswrapper[5072]: E1124 11:28:07.295832 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d333800dfc55359b3b38b4e531c7eb0c21351aa1dbd410d7878194807ee7c163\": container with ID starting with d333800dfc55359b3b38b4e531c7eb0c21351aa1dbd410d7878194807ee7c163 not found: ID does not exist" containerID="d333800dfc55359b3b38b4e531c7eb0c21351aa1dbd410d7878194807ee7c163" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.295887 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d333800dfc55359b3b38b4e531c7eb0c21351aa1dbd410d7878194807ee7c163"} err="failed to get container status \"d333800dfc55359b3b38b4e531c7eb0c21351aa1dbd410d7878194807ee7c163\": rpc error: code = NotFound desc = could not find container \"d333800dfc55359b3b38b4e531c7eb0c21351aa1dbd410d7878194807ee7c163\": container with ID starting with d333800dfc55359b3b38b4e531c7eb0c21351aa1dbd410d7878194807ee7c163 not found: ID does not exist" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.354127 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9-config-data\") pod \"dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9\" (UID: \"dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9\") " Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.354281 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbg9q\" (UniqueName: \"kubernetes.io/projected/dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9-kube-api-access-tbg9q\") pod \"dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9\" (UID: \"dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9\") " Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.354356 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9-combined-ca-bundle\") pod \"dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9\" (UID: \"dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9\") " Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.381756 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9-kube-api-access-tbg9q" (OuterVolumeSpecName: "kube-api-access-tbg9q") pod "dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9" (UID: "dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9"). InnerVolumeSpecName "kube-api-access-tbg9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.404467 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9-config-data" (OuterVolumeSpecName: "config-data") pod "dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9" (UID: "dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.406760 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9" (UID: "dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.456450 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.456495 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbg9q\" (UniqueName: \"kubernetes.io/projected/dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9-kube-api-access-tbg9q\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.456505 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.601555 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.614592 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.628331 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 11:28:07 crc kubenswrapper[5072]: E1124 11:28:07.628822 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.628843 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.629084 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.629840 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.633619 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.633848 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.634000 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.637570 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.761404 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a061135-fd7e-4c6c-bbca-422e684c0ccb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8a061135-fd7e-4c6c-bbca-422e684c0ccb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.761478 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a061135-fd7e-4c6c-bbca-422e684c0ccb-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8a061135-fd7e-4c6c-bbca-422e684c0ccb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.761653 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a061135-fd7e-4c6c-bbca-422e684c0ccb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8a061135-fd7e-4c6c-bbca-422e684c0ccb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.761917 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzm6s\" (UniqueName: \"kubernetes.io/projected/8a061135-fd7e-4c6c-bbca-422e684c0ccb-kube-api-access-lzm6s\") pod \"nova-cell1-novncproxy-0\" (UID: \"8a061135-fd7e-4c6c-bbca-422e684c0ccb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.762051 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a061135-fd7e-4c6c-bbca-422e684c0ccb-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8a061135-fd7e-4c6c-bbca-422e684c0ccb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.864010 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a061135-fd7e-4c6c-bbca-422e684c0ccb-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8a061135-fd7e-4c6c-bbca-422e684c0ccb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.864071 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a061135-fd7e-4c6c-bbca-422e684c0ccb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8a061135-fd7e-4c6c-bbca-422e684c0ccb\") " 
pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.864109 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzm6s\" (UniqueName: \"kubernetes.io/projected/8a061135-fd7e-4c6c-bbca-422e684c0ccb-kube-api-access-lzm6s\") pod \"nova-cell1-novncproxy-0\" (UID: \"8a061135-fd7e-4c6c-bbca-422e684c0ccb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.864129 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a061135-fd7e-4c6c-bbca-422e684c0ccb-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8a061135-fd7e-4c6c-bbca-422e684c0ccb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.864218 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a061135-fd7e-4c6c-bbca-422e684c0ccb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8a061135-fd7e-4c6c-bbca-422e684c0ccb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.868050 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a061135-fd7e-4c6c-bbca-422e684c0ccb-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8a061135-fd7e-4c6c-bbca-422e684c0ccb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.868924 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a061135-fd7e-4c6c-bbca-422e684c0ccb-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8a061135-fd7e-4c6c-bbca-422e684c0ccb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.869113 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a061135-fd7e-4c6c-bbca-422e684c0ccb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8a061135-fd7e-4c6c-bbca-422e684c0ccb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.871321 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a061135-fd7e-4c6c-bbca-422e684c0ccb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8a061135-fd7e-4c6c-bbca-422e684c0ccb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.880201 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzm6s\" (UniqueName: \"kubernetes.io/projected/8a061135-fd7e-4c6c-bbca-422e684c0ccb-kube-api-access-lzm6s\") pod \"nova-cell1-novncproxy-0\" (UID: \"8a061135-fd7e-4c6c-bbca-422e684c0ccb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:28:07 crc kubenswrapper[5072]: I1124 11:28:07.989518 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:28:08 crc kubenswrapper[5072]: I1124 11:28:08.440765 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 11:28:08 crc kubenswrapper[5072]: W1124 11:28:08.441431 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a061135_fd7e_4c6c_bbca_422e684c0ccb.slice/crio-dd20d2954d04723cf2108d1f10a8dae901fe6ebabec7eec3cb5b4892033ae02e WatchSource:0}: Error finding container dd20d2954d04723cf2108d1f10a8dae901fe6ebabec7eec3cb5b4892033ae02e: Status 404 returned error can't find the container with id dd20d2954d04723cf2108d1f10a8dae901fe6ebabec7eec3cb5b4892033ae02e Nov 24 11:28:09 crc kubenswrapper[5072]: I1124 11:28:09.033060 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9" path="/var/lib/kubelet/pods/dfc34bce-a7cd-450b-8b0d-ed4d3172c2d9/volumes" Nov 24 11:28:09 crc kubenswrapper[5072]: I1124 11:28:09.283906 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8a061135-fd7e-4c6c-bbca-422e684c0ccb","Type":"ContainerStarted","Data":"d6f20aa28893cc255bbd4458c739bb3d8502d5dd5dccf2e3afb9a11ceea39c2f"} Nov 24 11:28:09 crc kubenswrapper[5072]: I1124 11:28:09.283969 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8a061135-fd7e-4c6c-bbca-422e684c0ccb","Type":"ContainerStarted","Data":"dd20d2954d04723cf2108d1f10a8dae901fe6ebabec7eec3cb5b4892033ae02e"} Nov 24 11:28:09 crc kubenswrapper[5072]: I1124 11:28:09.316278 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.316257987 podStartE2EDuration="2.316257987s" podCreationTimestamp="2025-11-24 11:28:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:28:09.313659262 +0000 UTC m=+1141.025183818" watchObservedRunningTime="2025-11-24 11:28:09.316257987 +0000 UTC m=+1141.027782473" Nov 24 11:28:09 crc kubenswrapper[5072]: I1124 11:28:09.735957 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 11:28:09 crc kubenswrapper[5072]: I1124 11:28:09.736582 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 11:28:09 crc kubenswrapper[5072]: I1124 11:28:09.737512 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 11:28:09 crc kubenswrapper[5072]: I1124 11:28:09.740789 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 11:28:10 crc kubenswrapper[5072]: I1124 11:28:10.293979 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 11:28:10 crc kubenswrapper[5072]: I1124 11:28:10.298557 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 11:28:10 crc kubenswrapper[5072]: I1124 11:28:10.496945 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-hl4mn"] Nov 24 11:28:10 crc kubenswrapper[5072]: I1124 11:28:10.499025 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-hl4mn" Nov 24 11:28:10 crc kubenswrapper[5072]: I1124 11:28:10.525512 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-hl4mn"] Nov 24 11:28:10 crc kubenswrapper[5072]: I1124 11:28:10.624719 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a85681e-0caa-48f6-8782-301c059a6380-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-hl4mn\" (UID: \"8a85681e-0caa-48f6-8782-301c059a6380\") " pod="openstack/dnsmasq-dns-5b856c5697-hl4mn" Nov 24 11:28:10 crc kubenswrapper[5072]: I1124 11:28:10.624839 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bj9m\" (UniqueName: \"kubernetes.io/projected/8a85681e-0caa-48f6-8782-301c059a6380-kube-api-access-4bj9m\") pod \"dnsmasq-dns-5b856c5697-hl4mn\" (UID: \"8a85681e-0caa-48f6-8782-301c059a6380\") " pod="openstack/dnsmasq-dns-5b856c5697-hl4mn" Nov 24 11:28:10 crc kubenswrapper[5072]: I1124 11:28:10.624904 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a85681e-0caa-48f6-8782-301c059a6380-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-hl4mn\" (UID: \"8a85681e-0caa-48f6-8782-301c059a6380\") " pod="openstack/dnsmasq-dns-5b856c5697-hl4mn" Nov 24 11:28:10 crc kubenswrapper[5072]: I1124 11:28:10.625279 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a85681e-0caa-48f6-8782-301c059a6380-dns-svc\") pod \"dnsmasq-dns-5b856c5697-hl4mn\" (UID: \"8a85681e-0caa-48f6-8782-301c059a6380\") " pod="openstack/dnsmasq-dns-5b856c5697-hl4mn" Nov 24 11:28:10 crc kubenswrapper[5072]: I1124 11:28:10.625332 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a85681e-0caa-48f6-8782-301c059a6380-config\") pod \"dnsmasq-dns-5b856c5697-hl4mn\" (UID: \"8a85681e-0caa-48f6-8782-301c059a6380\") " pod="openstack/dnsmasq-dns-5b856c5697-hl4mn" Nov 24 11:28:10 crc kubenswrapper[5072]: I1124 11:28:10.727433 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a85681e-0caa-48f6-8782-301c059a6380-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-hl4mn\" (UID: \"8a85681e-0caa-48f6-8782-301c059a6380\") " pod="openstack/dnsmasq-dns-5b856c5697-hl4mn" Nov 24 11:28:10 crc kubenswrapper[5072]: I1124 11:28:10.727528 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bj9m\" (UniqueName: \"kubernetes.io/projected/8a85681e-0caa-48f6-8782-301c059a6380-kube-api-access-4bj9m\") pod \"dnsmasq-dns-5b856c5697-hl4mn\" (UID: \"8a85681e-0caa-48f6-8782-301c059a6380\") " pod="openstack/dnsmasq-dns-5b856c5697-hl4mn" Nov 24 11:28:10 crc kubenswrapper[5072]: I1124 11:28:10.727595 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a85681e-0caa-48f6-8782-301c059a6380-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-hl4mn\" (UID: \"8a85681e-0caa-48f6-8782-301c059a6380\") " pod="openstack/dnsmasq-dns-5b856c5697-hl4mn" Nov 24 11:28:10 crc kubenswrapper[5072]: I1124 11:28:10.728872 5072 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a85681e-0caa-48f6-8782-301c059a6380-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-hl4mn\" (UID: \"8a85681e-0caa-48f6-8782-301c059a6380\") " pod="openstack/dnsmasq-dns-5b856c5697-hl4mn" Nov 24 11:28:10 crc kubenswrapper[5072]: I1124 11:28:10.728908 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a85681e-0caa-48f6-8782-301c059a6380-dns-svc\") pod \"dnsmasq-dns-5b856c5697-hl4mn\" (UID: \"8a85681e-0caa-48f6-8782-301c059a6380\") " pod="openstack/dnsmasq-dns-5b856c5697-hl4mn" Nov 24 11:28:10 crc kubenswrapper[5072]: I1124 11:28:10.728930 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a85681e-0caa-48f6-8782-301c059a6380-dns-svc\") pod \"dnsmasq-dns-5b856c5697-hl4mn\" (UID: \"8a85681e-0caa-48f6-8782-301c059a6380\") " pod="openstack/dnsmasq-dns-5b856c5697-hl4mn" Nov 24 11:28:10 crc kubenswrapper[5072]: I1124 11:28:10.728998 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a85681e-0caa-48f6-8782-301c059a6380-config\") pod \"dnsmasq-dns-5b856c5697-hl4mn\" (UID: \"8a85681e-0caa-48f6-8782-301c059a6380\") " pod="openstack/dnsmasq-dns-5b856c5697-hl4mn" Nov 24 11:28:10 crc kubenswrapper[5072]: I1124 11:28:10.729229 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a85681e-0caa-48f6-8782-301c059a6380-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-hl4mn\" (UID: \"8a85681e-0caa-48f6-8782-301c059a6380\") " pod="openstack/dnsmasq-dns-5b856c5697-hl4mn" Nov 24 11:28:10 crc kubenswrapper[5072]: I1124 11:28:10.729863 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a85681e-0caa-48f6-8782-301c059a6380-config\") pod \"dnsmasq-dns-5b856c5697-hl4mn\" (UID: \"8a85681e-0caa-48f6-8782-301c059a6380\") " pod="openstack/dnsmasq-dns-5b856c5697-hl4mn" Nov 24 11:28:10 crc kubenswrapper[5072]: I1124 11:28:10.759191 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bj9m\" (UniqueName: \"kubernetes.io/projected/8a85681e-0caa-48f6-8782-301c059a6380-kube-api-access-4bj9m\") pod \"dnsmasq-dns-5b856c5697-hl4mn\" (UID: \"8a85681e-0caa-48f6-8782-301c059a6380\") " pod="openstack/dnsmasq-dns-5b856c5697-hl4mn" Nov 24 11:28:10 crc kubenswrapper[5072]: I1124 11:28:10.817318 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-hl4mn" Nov 24 11:28:11 crc kubenswrapper[5072]: I1124 11:28:11.477955 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-hl4mn"] Nov 24 11:28:11 crc kubenswrapper[5072]: W1124 11:28:11.494959 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a85681e_0caa_48f6_8782_301c059a6380.slice/crio-2be961e1f5737a585999fd66594b58ea46864b77fd06fe9d02261c12603fc722 WatchSource:0}: Error finding container 2be961e1f5737a585999fd66594b58ea46864b77fd06fe9d02261c12603fc722: Status 404 returned error can't find the container with id 2be961e1f5737a585999fd66594b58ea46864b77fd06fe9d02261c12603fc722 Nov 24 11:28:12 crc kubenswrapper[5072]: I1124 11:28:12.310835 5072 generic.go:334] "Generic (PLEG): container finished" podID="8a85681e-0caa-48f6-8782-301c059a6380" containerID="8ce26fce3409fdaa9d8fbdb51e6a94dc52eba262d55fe9f8c18693fe3377d195" exitCode=0 Nov 24 11:28:12 crc kubenswrapper[5072]: I1124 11:28:12.311829 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-hl4mn" event={"ID":"8a85681e-0caa-48f6-8782-301c059a6380","Type":"ContainerDied","Data":"8ce26fce3409fdaa9d8fbdb51e6a94dc52eba262d55fe9f8c18693fe3377d195"} Nov 24 11:28:12 crc kubenswrapper[5072]: I1124 11:28:12.311858 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-hl4mn" event={"ID":"8a85681e-0caa-48f6-8782-301c059a6380","Type":"ContainerStarted","Data":"2be961e1f5737a585999fd66594b58ea46864b77fd06fe9d02261c12603fc722"} Nov 24 11:28:12 crc kubenswrapper[5072]: I1124 11:28:12.716285 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:28:12 crc kubenswrapper[5072]: I1124 11:28:12.716909 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b8c03edf-cb55-4853-8227-b65c429794bd" containerName="ceilometer-central-agent" containerID="cri-o://fa72750bccd5724b03966ce2905ef4ca1c605e5f17621ac12dbc4a30fabd3b61" gracePeriod=30 Nov 24 11:28:12 crc kubenswrapper[5072]: I1124 11:28:12.717046 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b8c03edf-cb55-4853-8227-b65c429794bd" containerName="proxy-httpd" containerID="cri-o://ee79a059c940f82154a7f5309fb75bca4181df17cdfd9bf0b938d5e5869a6560" gracePeriod=30 Nov 24 11:28:12 crc kubenswrapper[5072]: I1124 11:28:12.717092 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b8c03edf-cb55-4853-8227-b65c429794bd" containerName="sg-core" containerID="cri-o://116c7b03ea1d5434926d249492f873ad44dcfbbc46a6fe941a618bcad53eee0b" gracePeriod=30 Nov 24 11:28:12 crc kubenswrapper[5072]: I1124 11:28:12.717130 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b8c03edf-cb55-4853-8227-b65c429794bd" containerName="ceilometer-notification-agent" containerID="cri-o://851f5ff11469a32be48240ef4f81d0b7c0e6b06d47a31c096ad77d7de819f41e" gracePeriod=30 Nov 24 11:28:12 crc kubenswrapper[5072]: I1124 11:28:12.739960 5072 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b8c03edf-cb55-4853-8227-b65c429794bd" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Nov 24 11:28:12 crc kubenswrapper[5072]: I1124 
11:28:12.990544 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:28:13 crc kubenswrapper[5072]: I1124 11:28:13.256647 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:28:13 crc kubenswrapper[5072]: I1124 11:28:13.321247 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-hl4mn" event={"ID":"8a85681e-0caa-48f6-8782-301c059a6380","Type":"ContainerStarted","Data":"ad68e303220191203da71cc8f477c74d48a74897203681270d71f1d1803ce42f"} Nov 24 11:28:13 crc kubenswrapper[5072]: I1124 11:28:13.321439 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b856c5697-hl4mn" Nov 24 11:28:13 crc kubenswrapper[5072]: I1124 11:28:13.323745 5072 generic.go:334] "Generic (PLEG): container finished" podID="b8c03edf-cb55-4853-8227-b65c429794bd" containerID="ee79a059c940f82154a7f5309fb75bca4181df17cdfd9bf0b938d5e5869a6560" exitCode=0 Nov 24 11:28:13 crc kubenswrapper[5072]: I1124 11:28:13.323767 5072 generic.go:334] "Generic (PLEG): container finished" podID="b8c03edf-cb55-4853-8227-b65c429794bd" containerID="116c7b03ea1d5434926d249492f873ad44dcfbbc46a6fe941a618bcad53eee0b" exitCode=2 Nov 24 11:28:13 crc kubenswrapper[5072]: I1124 11:28:13.323774 5072 generic.go:334] "Generic (PLEG): container finished" podID="b8c03edf-cb55-4853-8227-b65c429794bd" containerID="fa72750bccd5724b03966ce2905ef4ca1c605e5f17621ac12dbc4a30fabd3b61" exitCode=0 Nov 24 11:28:13 crc kubenswrapper[5072]: I1124 11:28:13.323794 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8c03edf-cb55-4853-8227-b65c429794bd","Type":"ContainerDied","Data":"ee79a059c940f82154a7f5309fb75bca4181df17cdfd9bf0b938d5e5869a6560"} Nov 24 11:28:13 crc kubenswrapper[5072]: I1124 11:28:13.323841 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8c03edf-cb55-4853-8227-b65c429794bd","Type":"ContainerDied","Data":"116c7b03ea1d5434926d249492f873ad44dcfbbc46a6fe941a618bcad53eee0b"} Nov 24 11:28:13 crc kubenswrapper[5072]: I1124 11:28:13.323855 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8c03edf-cb55-4853-8227-b65c429794bd","Type":"ContainerDied","Data":"fa72750bccd5724b03966ce2905ef4ca1c605e5f17621ac12dbc4a30fabd3b61"} Nov 24 11:28:13 crc kubenswrapper[5072]: I1124 11:28:13.323901 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="38e65ee4-652d-4453-9ea6-50b067da9715" containerName="nova-api-log" containerID="cri-o://590b271e7d29a2015a1d4fe6d86ecbaae249029946c53767a0af9e9128711204" gracePeriod=30 Nov 24 11:28:13 crc kubenswrapper[5072]: I1124 11:28:13.323985 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="38e65ee4-652d-4453-9ea6-50b067da9715" containerName="nova-api-api" containerID="cri-o://e514dd679a5970456992ef29bdcbc5e10593cb5f01ff47e87295d9d61faa44c3" gracePeriod=30 Nov 24 11:28:13 crc kubenswrapper[5072]: I1124 11:28:13.347136 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b856c5697-hl4mn" podStartSLOduration=3.347122162 podStartE2EDuration="3.347122162s" podCreationTimestamp="2025-11-24 11:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 
11:28:13.342327423 +0000 UTC m=+1145.053851909" watchObservedRunningTime="2025-11-24 11:28:13.347122162 +0000 UTC m=+1145.058646638" Nov 24 11:28:13 crc kubenswrapper[5072]: I1124 11:28:13.644903 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:28:13 crc kubenswrapper[5072]: I1124 11:28:13.645154 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:28:13 crc kubenswrapper[5072]: I1124 11:28:13.645202 5072 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 11:28:13 crc kubenswrapper[5072]: I1124 11:28:13.645962 5072 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b030b14c475fa1e60935020fac8bbc582c34d80ebfa6d2f82381ce67034a5e50"} pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 11:28:13 crc kubenswrapper[5072]: I1124 11:28:13.646028 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" containerID="cri-o://b030b14c475fa1e60935020fac8bbc582c34d80ebfa6d2f82381ce67034a5e50" gracePeriod=600 Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.329572 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.336423 5072 generic.go:334] "Generic (PLEG): container finished" podID="38e65ee4-652d-4453-9ea6-50b067da9715" containerID="590b271e7d29a2015a1d4fe6d86ecbaae249029946c53767a0af9e9128711204" exitCode=143 Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.336533 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"38e65ee4-652d-4453-9ea6-50b067da9715","Type":"ContainerDied","Data":"590b271e7d29a2015a1d4fe6d86ecbaae249029946c53767a0af9e9128711204"} Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.339114 5072 generic.go:334] "Generic (PLEG): container finished" podID="b8c03edf-cb55-4853-8227-b65c429794bd" containerID="851f5ff11469a32be48240ef4f81d0b7c0e6b06d47a31c096ad77d7de819f41e" exitCode=0 Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.339169 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8c03edf-cb55-4853-8227-b65c429794bd","Type":"ContainerDied","Data":"851f5ff11469a32be48240ef4f81d0b7c0e6b06d47a31c096ad77d7de819f41e"} Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.339195 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8c03edf-cb55-4853-8227-b65c429794bd","Type":"ContainerDied","Data":"38af200b3a98fcbff28587186482cf655a09ea98491d5c7d6445daff9eba24a1"} Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.339211 5072 scope.go:117] "RemoveContainer" containerID="ee79a059c940f82154a7f5309fb75bca4181df17cdfd9bf0b938d5e5869a6560" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.339219 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.342133 5072 generic.go:334] "Generic (PLEG): container finished" podID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerID="b030b14c475fa1e60935020fac8bbc582c34d80ebfa6d2f82381ce67034a5e50" exitCode=0 Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.342198 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerDied","Data":"b030b14c475fa1e60935020fac8bbc582c34d80ebfa6d2f82381ce67034a5e50"} Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.342230 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerStarted","Data":"6f55c06922e799a9c07f40b576b3a8c5fadc1f87864557b3d2231c8cbac92093"} Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.354978 5072 scope.go:117] "RemoveContainer" containerID="116c7b03ea1d5434926d249492f873ad44dcfbbc46a6fe941a618bcad53eee0b" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.380523 5072 scope.go:117] "RemoveContainer" containerID="851f5ff11469a32be48240ef4f81d0b7c0e6b06d47a31c096ad77d7de819f41e" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.457548 5072 scope.go:117] "RemoveContainer" containerID="fa72750bccd5724b03966ce2905ef4ca1c605e5f17621ac12dbc4a30fabd3b61" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.495979 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-config-data\") pod 
\"b8c03edf-cb55-4853-8227-b65c429794bd\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.496021 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hmg8\" (UniqueName: \"kubernetes.io/projected/b8c03edf-cb55-4853-8227-b65c429794bd-kube-api-access-2hmg8\") pod \"b8c03edf-cb55-4853-8227-b65c429794bd\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.496046 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-ceilometer-tls-certs\") pod \"b8c03edf-cb55-4853-8227-b65c429794bd\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.496100 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8c03edf-cb55-4853-8227-b65c429794bd-log-httpd\") pod \"b8c03edf-cb55-4853-8227-b65c429794bd\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.496124 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-combined-ca-bundle\") pod \"b8c03edf-cb55-4853-8227-b65c429794bd\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.496146 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8c03edf-cb55-4853-8227-b65c429794bd-run-httpd\") pod \"b8c03edf-cb55-4853-8227-b65c429794bd\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.496168 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-sg-core-conf-yaml\") pod \"b8c03edf-cb55-4853-8227-b65c429794bd\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.496231 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-scripts\") pod \"b8c03edf-cb55-4853-8227-b65c429794bd\" (UID: \"b8c03edf-cb55-4853-8227-b65c429794bd\") " Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.496480 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8c03edf-cb55-4853-8227-b65c429794bd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b8c03edf-cb55-4853-8227-b65c429794bd" (UID: "b8c03edf-cb55-4853-8227-b65c429794bd"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.497070 5072 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8c03edf-cb55-4853-8227-b65c429794bd-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.497422 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8c03edf-cb55-4853-8227-b65c429794bd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b8c03edf-cb55-4853-8227-b65c429794bd" (UID: "b8c03edf-cb55-4853-8227-b65c429794bd"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.504072 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-scripts" (OuterVolumeSpecName: "scripts") pod "b8c03edf-cb55-4853-8227-b65c429794bd" (UID: "b8c03edf-cb55-4853-8227-b65c429794bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.507713 5072 scope.go:117] "RemoveContainer" containerID="ee79a059c940f82154a7f5309fb75bca4181df17cdfd9bf0b938d5e5869a6560" Nov 24 11:28:14 crc kubenswrapper[5072]: E1124 11:28:14.508090 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee79a059c940f82154a7f5309fb75bca4181df17cdfd9bf0b938d5e5869a6560\": container with ID starting with ee79a059c940f82154a7f5309fb75bca4181df17cdfd9bf0b938d5e5869a6560 not found: ID does not exist" containerID="ee79a059c940f82154a7f5309fb75bca4181df17cdfd9bf0b938d5e5869a6560" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.508133 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee79a059c940f82154a7f5309fb75bca4181df17cdfd9bf0b938d5e5869a6560"} err="failed to get container status \"ee79a059c940f82154a7f5309fb75bca4181df17cdfd9bf0b938d5e5869a6560\": rpc error: code = NotFound desc = could not find container \"ee79a059c940f82154a7f5309fb75bca4181df17cdfd9bf0b938d5e5869a6560\": container with ID starting with ee79a059c940f82154a7f5309fb75bca4181df17cdfd9bf0b938d5e5869a6560 not found: ID does not exist" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.508164 5072 scope.go:117] "RemoveContainer" containerID="116c7b03ea1d5434926d249492f873ad44dcfbbc46a6fe941a618bcad53eee0b" Nov 24 11:28:14 crc kubenswrapper[5072]: E1124 11:28:14.508512 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"116c7b03ea1d5434926d249492f873ad44dcfbbc46a6fe941a618bcad53eee0b\": container with ID starting with 116c7b03ea1d5434926d249492f873ad44dcfbbc46a6fe941a618bcad53eee0b not found: ID does not exist" containerID="116c7b03ea1d5434926d249492f873ad44dcfbbc46a6fe941a618bcad53eee0b" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.508532 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"116c7b03ea1d5434926d249492f873ad44dcfbbc46a6fe941a618bcad53eee0b"} err="failed to get container status \"116c7b03ea1d5434926d249492f873ad44dcfbbc46a6fe941a618bcad53eee0b\": rpc error: code = NotFound desc = could not find container \"116c7b03ea1d5434926d249492f873ad44dcfbbc46a6fe941a618bcad53eee0b\": container with ID starting with 
116c7b03ea1d5434926d249492f873ad44dcfbbc46a6fe941a618bcad53eee0b not found: ID does not exist" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.508561 5072 scope.go:117] "RemoveContainer" containerID="851f5ff11469a32be48240ef4f81d0b7c0e6b06d47a31c096ad77d7de819f41e" Nov 24 11:28:14 crc kubenswrapper[5072]: E1124 11:28:14.508856 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"851f5ff11469a32be48240ef4f81d0b7c0e6b06d47a31c096ad77d7de819f41e\": container with ID starting with 851f5ff11469a32be48240ef4f81d0b7c0e6b06d47a31c096ad77d7de819f41e not found: ID does not exist" containerID="851f5ff11469a32be48240ef4f81d0b7c0e6b06d47a31c096ad77d7de819f41e" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.508875 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"851f5ff11469a32be48240ef4f81d0b7c0e6b06d47a31c096ad77d7de819f41e"} err="failed to get container status \"851f5ff11469a32be48240ef4f81d0b7c0e6b06d47a31c096ad77d7de819f41e\": rpc error: code = NotFound desc = could not find container \"851f5ff11469a32be48240ef4f81d0b7c0e6b06d47a31c096ad77d7de819f41e\": container with ID starting with 851f5ff11469a32be48240ef4f81d0b7c0e6b06d47a31c096ad77d7de819f41e not found: ID does not exist" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.508891 5072 scope.go:117] "RemoveContainer" containerID="fa72750bccd5724b03966ce2905ef4ca1c605e5f17621ac12dbc4a30fabd3b61" Nov 24 11:28:14 crc kubenswrapper[5072]: E1124 11:28:14.509091 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa72750bccd5724b03966ce2905ef4ca1c605e5f17621ac12dbc4a30fabd3b61\": container with ID starting with fa72750bccd5724b03966ce2905ef4ca1c605e5f17621ac12dbc4a30fabd3b61 not found: ID does not exist" containerID="fa72750bccd5724b03966ce2905ef4ca1c605e5f17621ac12dbc4a30fabd3b61" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.509110 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa72750bccd5724b03966ce2905ef4ca1c605e5f17621ac12dbc4a30fabd3b61"} err="failed to get container status \"fa72750bccd5724b03966ce2905ef4ca1c605e5f17621ac12dbc4a30fabd3b61\": rpc error: code = NotFound desc = could not find container \"fa72750bccd5724b03966ce2905ef4ca1c605e5f17621ac12dbc4a30fabd3b61\": container with ID starting with fa72750bccd5724b03966ce2905ef4ca1c605e5f17621ac12dbc4a30fabd3b61 not found: ID does not exist" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.509124 5072 scope.go:117] "RemoveContainer" containerID="8e2fafce48ed7d24bea410cc4a09f0aa29c5014f23ce7269a5e5cc3ebe7aa12f" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.514578 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8c03edf-cb55-4853-8227-b65c429794bd-kube-api-access-2hmg8" (OuterVolumeSpecName: "kube-api-access-2hmg8") pod "b8c03edf-cb55-4853-8227-b65c429794bd" (UID: "b8c03edf-cb55-4853-8227-b65c429794bd"). InnerVolumeSpecName "kube-api-access-2hmg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.562632 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b8c03edf-cb55-4853-8227-b65c429794bd" (UID: "b8c03edf-cb55-4853-8227-b65c429794bd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.564449 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b8c03edf-cb55-4853-8227-b65c429794bd" (UID: "b8c03edf-cb55-4853-8227-b65c429794bd"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.587128 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8c03edf-cb55-4853-8227-b65c429794bd" (UID: "b8c03edf-cb55-4853-8227-b65c429794bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.599014 5072 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.599043 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.599053 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hmg8\" (UniqueName: \"kubernetes.io/projected/b8c03edf-cb55-4853-8227-b65c429794bd-kube-api-access-2hmg8\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.599061 5072 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.599072 5072 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8c03edf-cb55-4853-8227-b65c429794bd-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.599081 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.615997 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-config-data" (OuterVolumeSpecName: "config-data") pod "b8c03edf-cb55-4853-8227-b65c429794bd" (UID: "b8c03edf-cb55-4853-8227-b65c429794bd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.667673 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.676773 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.691127 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:28:14 crc kubenswrapper[5072]: E1124 11:28:14.691648 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8c03edf-cb55-4853-8227-b65c429794bd" containerName="ceilometer-central-agent" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.691711 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8c03edf-cb55-4853-8227-b65c429794bd" containerName="ceilometer-central-agent" Nov 24 11:28:14 crc kubenswrapper[5072]: E1124 11:28:14.691800 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8c03edf-cb55-4853-8227-b65c429794bd" containerName="sg-core" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.691849 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8c03edf-cb55-4853-8227-b65c429794bd" containerName="sg-core" Nov 24 11:28:14 crc kubenswrapper[5072]: E1124 11:28:14.691916 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8c03edf-cb55-4853-8227-b65c429794bd" containerName="ceilometer-notification-agent" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.691966 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8c03edf-cb55-4853-8227-b65c429794bd" containerName="ceilometer-notification-agent" Nov 24 11:28:14 crc kubenswrapper[5072]: E1124 11:28:14.692036 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8c03edf-cb55-4853-8227-b65c429794bd" containerName="proxy-httpd" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.692085 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8c03edf-cb55-4853-8227-b65c429794bd" containerName="proxy-httpd" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.692278 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8c03edf-cb55-4853-8227-b65c429794bd" containerName="ceilometer-central-agent" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.692341 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8c03edf-cb55-4853-8227-b65c429794bd" containerName="ceilometer-notification-agent" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.692438 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8c03edf-cb55-4853-8227-b65c429794bd" containerName="sg-core" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.692496 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8c03edf-cb55-4853-8227-b65c429794bd" containerName="proxy-httpd" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.693985 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.696259 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.704239 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.704341 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.704854 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8c03edf-cb55-4853-8227-b65c429794bd-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.716950 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.806507 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/984c9c3d-dc52-4152-8ec4-e1ed94695079-log-httpd\") pod \"ceilometer-0\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " pod="openstack/ceilometer-0" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.806660 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-scripts\") pod \"ceilometer-0\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " pod="openstack/ceilometer-0" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.806702 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcztg\" (UniqueName: \"kubernetes.io/projected/984c9c3d-dc52-4152-8ec4-e1ed94695079-kube-api-access-mcztg\") pod \"ceilometer-0\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " pod="openstack/ceilometer-0" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.806740 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " pod="openstack/ceilometer-0" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.806764 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/984c9c3d-dc52-4152-8ec4-e1ed94695079-run-httpd\") pod \"ceilometer-0\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " pod="openstack/ceilometer-0" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.806892 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " pod="openstack/ceilometer-0" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.806949 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-config-data\") pod \"ceilometer-0\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " 
pod="openstack/ceilometer-0" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.807188 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " pod="openstack/ceilometer-0" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.908493 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " pod="openstack/ceilometer-0" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.908589 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/984c9c3d-dc52-4152-8ec4-e1ed94695079-run-httpd\") pod \"ceilometer-0\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " pod="openstack/ceilometer-0" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.908625 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " pod="openstack/ceilometer-0" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.908676 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-config-data\") pod \"ceilometer-0\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " pod="openstack/ceilometer-0" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.908777 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " pod="openstack/ceilometer-0" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.908868 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/984c9c3d-dc52-4152-8ec4-e1ed94695079-log-httpd\") pod \"ceilometer-0\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " pod="openstack/ceilometer-0" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.908914 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-scripts\") pod \"ceilometer-0\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " pod="openstack/ceilometer-0" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.908965 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcztg\" (UniqueName: \"kubernetes.io/projected/984c9c3d-dc52-4152-8ec4-e1ed94695079-kube-api-access-mcztg\") pod \"ceilometer-0\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " pod="openstack/ceilometer-0" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.909443 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/984c9c3d-dc52-4152-8ec4-e1ed94695079-run-httpd\") pod \"ceilometer-0\" (UID: 
\"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " pod="openstack/ceilometer-0" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.909443 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/984c9c3d-dc52-4152-8ec4-e1ed94695079-log-httpd\") pod \"ceilometer-0\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " pod="openstack/ceilometer-0" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.912734 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " pod="openstack/ceilometer-0" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.913281 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-scripts\") pod \"ceilometer-0\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " pod="openstack/ceilometer-0" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.914931 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " pod="openstack/ceilometer-0" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.915286 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " pod="openstack/ceilometer-0" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.915677 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-config-data\") pod \"ceilometer-0\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " pod="openstack/ceilometer-0" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.932787 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcztg\" (UniqueName: \"kubernetes.io/projected/984c9c3d-dc52-4152-8ec4-e1ed94695079-kube-api-access-mcztg\") pod \"ceilometer-0\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " pod="openstack/ceilometer-0" Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.938515 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:28:14 crc kubenswrapper[5072]: I1124 11:28:14.939137 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:28:15 crc kubenswrapper[5072]: I1124 11:28:15.035484 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8c03edf-cb55-4853-8227-b65c429794bd" path="/var/lib/kubelet/pods/b8c03edf-cb55-4853-8227-b65c429794bd/volumes" Nov 24 11:28:15 crc kubenswrapper[5072]: I1124 11:28:15.425784 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:28:15 crc kubenswrapper[5072]: W1124 11:28:15.432699 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod984c9c3d_dc52_4152_8ec4_e1ed94695079.slice/crio-b09ef10479e208436d8c13cb75e76c1e5774fc55a427f890dd34f299845bf2b6 WatchSource:0}: Error finding container b09ef10479e208436d8c13cb75e76c1e5774fc55a427f890dd34f299845bf2b6: Status 404 returned error can't find the container with id b09ef10479e208436d8c13cb75e76c1e5774fc55a427f890dd34f299845bf2b6 Nov 24 11:28:16 crc kubenswrapper[5072]: I1124 11:28:16.364903 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"984c9c3d-dc52-4152-8ec4-e1ed94695079","Type":"ContainerStarted","Data":"ec422472b5c5c599eb5d463d34a8c359fa5e367157abe0d52dd0facd4dab3618"} Nov 24 11:28:16 crc kubenswrapper[5072]: I1124 11:28:16.365400 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"984c9c3d-dc52-4152-8ec4-e1ed94695079","Type":"ContainerStarted","Data":"b09ef10479e208436d8c13cb75e76c1e5774fc55a427f890dd34f299845bf2b6"} Nov 24 11:28:16 crc kubenswrapper[5072]: I1124 11:28:16.858735 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:28:16 crc kubenswrapper[5072]: I1124 11:28:16.945939 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e65ee4-652d-4453-9ea6-50b067da9715-combined-ca-bundle\") pod \"38e65ee4-652d-4453-9ea6-50b067da9715\" (UID: \"38e65ee4-652d-4453-9ea6-50b067da9715\") " Nov 24 11:28:16 crc kubenswrapper[5072]: I1124 11:28:16.946279 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7f488\" (UniqueName: \"kubernetes.io/projected/38e65ee4-652d-4453-9ea6-50b067da9715-kube-api-access-7f488\") pod \"38e65ee4-652d-4453-9ea6-50b067da9715\" (UID: \"38e65ee4-652d-4453-9ea6-50b067da9715\") " Nov 24 11:28:16 crc kubenswrapper[5072]: I1124 11:28:16.946383 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38e65ee4-652d-4453-9ea6-50b067da9715-logs\") pod \"38e65ee4-652d-4453-9ea6-50b067da9715\" (UID: \"38e65ee4-652d-4453-9ea6-50b067da9715\") " Nov 24 11:28:16 crc kubenswrapper[5072]: I1124 11:28:16.946400 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e65ee4-652d-4453-9ea6-50b067da9715-config-data\") pod \"38e65ee4-652d-4453-9ea6-50b067da9715\" (UID: \"38e65ee4-652d-4453-9ea6-50b067da9715\") " Nov 24 11:28:16 crc kubenswrapper[5072]: I1124 11:28:16.950250 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38e65ee4-652d-4453-9ea6-50b067da9715-logs" (OuterVolumeSpecName: "logs") pod "38e65ee4-652d-4453-9ea6-50b067da9715" (UID: "38e65ee4-652d-4453-9ea6-50b067da9715"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:28:16 crc kubenswrapper[5072]: I1124 11:28:16.956640 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38e65ee4-652d-4453-9ea6-50b067da9715-kube-api-access-7f488" (OuterVolumeSpecName: "kube-api-access-7f488") pod "38e65ee4-652d-4453-9ea6-50b067da9715" (UID: "38e65ee4-652d-4453-9ea6-50b067da9715"). InnerVolumeSpecName "kube-api-access-7f488". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:28:16 crc kubenswrapper[5072]: I1124 11:28:16.983329 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e65ee4-652d-4453-9ea6-50b067da9715-config-data" (OuterVolumeSpecName: "config-data") pod "38e65ee4-652d-4453-9ea6-50b067da9715" (UID: "38e65ee4-652d-4453-9ea6-50b067da9715"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:28:16 crc kubenswrapper[5072]: I1124 11:28:16.983662 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e65ee4-652d-4453-9ea6-50b067da9715-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38e65ee4-652d-4453-9ea6-50b067da9715" (UID: "38e65ee4-652d-4453-9ea6-50b067da9715"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.047915 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e65ee4-652d-4453-9ea6-50b067da9715-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.048186 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7f488\" (UniqueName: \"kubernetes.io/projected/38e65ee4-652d-4453-9ea6-50b067da9715-kube-api-access-7f488\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.048264 5072 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38e65ee4-652d-4453-9ea6-50b067da9715-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.048319 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e65ee4-652d-4453-9ea6-50b067da9715-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:17 crc kubenswrapper[5072]: E1124 11:28:17.145974 5072 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38e65ee4_652d_4453_9ea6_50b067da9715.slice\": RecentStats: unable to find data in memory cache]" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.392237 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"984c9c3d-dc52-4152-8ec4-e1ed94695079","Type":"ContainerStarted","Data":"f332807d4940ab715a2fcc4d3258eae3fd321f2b7d1e786beb99032d1ede8dc0"} Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.395091 5072 generic.go:334] "Generic (PLEG): container finished" podID="38e65ee4-652d-4453-9ea6-50b067da9715" containerID="e514dd679a5970456992ef29bdcbc5e10593cb5f01ff47e87295d9d61faa44c3" exitCode=0 Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.395242 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"38e65ee4-652d-4453-9ea6-50b067da9715","Type":"ContainerDied","Data":"e514dd679a5970456992ef29bdcbc5e10593cb5f01ff47e87295d9d61faa44c3"} Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.395464 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"38e65ee4-652d-4453-9ea6-50b067da9715","Type":"ContainerDied","Data":"57ba67da85711e3fffd4685322d1892775d86a75ec0f1f2fcda7dc44ccf8c818"} Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.395274 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.395498 5072 scope.go:117] "RemoveContainer" containerID="e514dd679a5970456992ef29bdcbc5e10593cb5f01ff47e87295d9d61faa44c3" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.439815 5072 scope.go:117] "RemoveContainer" containerID="590b271e7d29a2015a1d4fe6d86ecbaae249029946c53767a0af9e9128711204" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.465534 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.480467 5072 scope.go:117] "RemoveContainer" containerID="e514dd679a5970456992ef29bdcbc5e10593cb5f01ff47e87295d9d61faa44c3" Nov 24 11:28:17 crc kubenswrapper[5072]: E1124 11:28:17.480897 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e514dd679a5970456992ef29bdcbc5e10593cb5f01ff47e87295d9d61faa44c3\": container with ID starting with e514dd679a5970456992ef29bdcbc5e10593cb5f01ff47e87295d9d61faa44c3 not found: ID does not exist" containerID="e514dd679a5970456992ef29bdcbc5e10593cb5f01ff47e87295d9d61faa44c3" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.480938 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e514dd679a5970456992ef29bdcbc5e10593cb5f01ff47e87295d9d61faa44c3"} err="failed to get container status \"e514dd679a5970456992ef29bdcbc5e10593cb5f01ff47e87295d9d61faa44c3\": rpc error: code = NotFound desc = could not find container \"e514dd679a5970456992ef29bdcbc5e10593cb5f01ff47e87295d9d61faa44c3\": container with ID starting with e514dd679a5970456992ef29bdcbc5e10593cb5f01ff47e87295d9d61faa44c3 not found: ID does not exist" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.480962 5072 scope.go:117] "RemoveContainer" containerID="590b271e7d29a2015a1d4fe6d86ecbaae249029946c53767a0af9e9128711204" Nov 24 11:28:17 crc kubenswrapper[5072]: E1124 11:28:17.481697 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"590b271e7d29a2015a1d4fe6d86ecbaae249029946c53767a0af9e9128711204\": container with ID starting with 590b271e7d29a2015a1d4fe6d86ecbaae249029946c53767a0af9e9128711204 not found: ID does not exist" containerID="590b271e7d29a2015a1d4fe6d86ecbaae249029946c53767a0af9e9128711204" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.481948 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"590b271e7d29a2015a1d4fe6d86ecbaae249029946c53767a0af9e9128711204"} err="failed to get container status \"590b271e7d29a2015a1d4fe6d86ecbaae249029946c53767a0af9e9128711204\": rpc error: code = NotFound desc = could not find container \"590b271e7d29a2015a1d4fe6d86ecbaae249029946c53767a0af9e9128711204\": container with ID starting with 
590b271e7d29a2015a1d4fe6d86ecbaae249029946c53767a0af9e9128711204 not found: ID does not exist" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.485163 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.498522 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 11:28:17 crc kubenswrapper[5072]: E1124 11:28:17.499557 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e65ee4-652d-4453-9ea6-50b067da9715" containerName="nova-api-api" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.499752 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e65ee4-652d-4453-9ea6-50b067da9715" containerName="nova-api-api" Nov 24 11:28:17 crc kubenswrapper[5072]: E1124 11:28:17.499892 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e65ee4-652d-4453-9ea6-50b067da9715" containerName="nova-api-log" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.499998 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e65ee4-652d-4453-9ea6-50b067da9715" containerName="nova-api-log" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.500487 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="38e65ee4-652d-4453-9ea6-50b067da9715" containerName="nova-api-log" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.500660 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="38e65ee4-652d-4453-9ea6-50b067da9715" containerName="nova-api-api" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.502155 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.505918 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.505958 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.506196 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.510522 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.560062 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-public-tls-certs\") pod \"nova-api-0\" (UID: \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\") " pod="openstack/nova-api-0" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.560125 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\") " pod="openstack/nova-api-0" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.560158 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2xsw\" (UniqueName: \"kubernetes.io/projected/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-kube-api-access-x2xsw\") pod \"nova-api-0\" (UID: \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\") " pod="openstack/nova-api-0" Nov 24 11:28:17 crc kubenswrapper[5072]: 
I1124 11:28:17.560231 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-config-data\") pod \"nova-api-0\" (UID: \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\") " pod="openstack/nova-api-0" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.560322 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\") " pod="openstack/nova-api-0" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.560382 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-logs\") pod \"nova-api-0\" (UID: \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\") " pod="openstack/nova-api-0" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.662300 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-config-data\") pod \"nova-api-0\" (UID: \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\") " pod="openstack/nova-api-0" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.662482 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\") " pod="openstack/nova-api-0" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.662541 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-logs\") pod \"nova-api-0\" (UID: \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\") " pod="openstack/nova-api-0" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.662640 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-public-tls-certs\") pod \"nova-api-0\" (UID: \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\") " pod="openstack/nova-api-0" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.662690 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\") " pod="openstack/nova-api-0" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.662734 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2xsw\" (UniqueName: \"kubernetes.io/projected/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-kube-api-access-x2xsw\") pod \"nova-api-0\" (UID: \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\") " pod="openstack/nova-api-0" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.663342 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-logs\") pod \"nova-api-0\" (UID: \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\") " pod="openstack/nova-api-0" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.675427 
5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-config-data\") pod \"nova-api-0\" (UID: \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\") " pod="openstack/nova-api-0" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.675708 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\") " pod="openstack/nova-api-0" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.675948 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-public-tls-certs\") pod \"nova-api-0\" (UID: \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\") " pod="openstack/nova-api-0" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.676842 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\") " pod="openstack/nova-api-0" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.680338 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2xsw\" (UniqueName: \"kubernetes.io/projected/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-kube-api-access-x2xsw\") pod \"nova-api-0\" (UID: \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\") " pod="openstack/nova-api-0" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.847179 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:28:17 crc kubenswrapper[5072]: I1124 11:28:17.990943 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:28:18 crc kubenswrapper[5072]: I1124 11:28:18.017292 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:28:18 crc kubenswrapper[5072]: I1124 11:28:18.324611 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:28:18 crc kubenswrapper[5072]: W1124 11:28:18.335681 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e1ea7ea_3fc5_4ae0_80c1_d769428711d2.slice/crio-976b3f052c4e6f0b3ed2366326e01bcb5d22cac2ad3fee3725bbb45af6f4f5cb WatchSource:0}: Error finding container 976b3f052c4e6f0b3ed2366326e01bcb5d22cac2ad3fee3725bbb45af6f4f5cb: Status 404 returned error can't find the container with id 976b3f052c4e6f0b3ed2366326e01bcb5d22cac2ad3fee3725bbb45af6f4f5cb Nov 24 11:28:18 crc kubenswrapper[5072]: I1124 11:28:18.410774 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2","Type":"ContainerStarted","Data":"976b3f052c4e6f0b3ed2366326e01bcb5d22cac2ad3fee3725bbb45af6f4f5cb"} Nov 24 11:28:18 crc kubenswrapper[5072]: I1124 11:28:18.415725 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"984c9c3d-dc52-4152-8ec4-e1ed94695079","Type":"ContainerStarted","Data":"30863584fb3ca1cbfe701ae14451e812a0fe096b373b2f14bae63c1cfa5668b1"} Nov 24 11:28:18 crc kubenswrapper[5072]: I1124 11:28:18.434813 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:28:18 crc kubenswrapper[5072]: I1124 11:28:18.570138 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-ghzbb"] Nov 24 11:28:18 crc kubenswrapper[5072]: I1124 11:28:18.572253 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ghzbb" Nov 24 11:28:18 crc kubenswrapper[5072]: I1124 11:28:18.579493 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 24 11:28:18 crc kubenswrapper[5072]: I1124 11:28:18.579666 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 24 11:28:18 crc kubenswrapper[5072]: I1124 11:28:18.584723 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-ghzbb"] Nov 24 11:28:18 crc kubenswrapper[5072]: I1124 11:28:18.677018 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv9x4\" (UniqueName: \"kubernetes.io/projected/e4d90486-6954-484a-aa10-2ffa6789cdc7-kube-api-access-nv9x4\") pod \"nova-cell1-cell-mapping-ghzbb\" (UID: \"e4d90486-6954-484a-aa10-2ffa6789cdc7\") " pod="openstack/nova-cell1-cell-mapping-ghzbb" Nov 24 11:28:18 crc kubenswrapper[5072]: I1124 11:28:18.677358 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4d90486-6954-484a-aa10-2ffa6789cdc7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ghzbb\" (UID: \"e4d90486-6954-484a-aa10-2ffa6789cdc7\") " pod="openstack/nova-cell1-cell-mapping-ghzbb" Nov 24 11:28:18 crc kubenswrapper[5072]: I1124 11:28:18.677503 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4d90486-6954-484a-aa10-2ffa6789cdc7-scripts\") pod \"nova-cell1-cell-mapping-ghzbb\" (UID: \"e4d90486-6954-484a-aa10-2ffa6789cdc7\") " pod="openstack/nova-cell1-cell-mapping-ghzbb" Nov 24 11:28:18 crc kubenswrapper[5072]: I1124 11:28:18.677527 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4d90486-6954-484a-aa10-2ffa6789cdc7-config-data\") pod \"nova-cell1-cell-mapping-ghzbb\" (UID: \"e4d90486-6954-484a-aa10-2ffa6789cdc7\") " pod="openstack/nova-cell1-cell-mapping-ghzbb" Nov 24 11:28:18 crc kubenswrapper[5072]: I1124 11:28:18.778629 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4d90486-6954-484a-aa10-2ffa6789cdc7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ghzbb\" (UID: \"e4d90486-6954-484a-aa10-2ffa6789cdc7\") " pod="openstack/nova-cell1-cell-mapping-ghzbb" Nov 24 11:28:18 crc kubenswrapper[5072]: I1124 11:28:18.778715 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4d90486-6954-484a-aa10-2ffa6789cdc7-scripts\") pod \"nova-cell1-cell-mapping-ghzbb\" (UID: \"e4d90486-6954-484a-aa10-2ffa6789cdc7\") " pod="openstack/nova-cell1-cell-mapping-ghzbb" Nov 24 11:28:18 crc kubenswrapper[5072]: I1124 11:28:18.778734 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4d90486-6954-484a-aa10-2ffa6789cdc7-config-data\") pod \"nova-cell1-cell-mapping-ghzbb\" (UID: \"e4d90486-6954-484a-aa10-2ffa6789cdc7\") " pod="openstack/nova-cell1-cell-mapping-ghzbb" Nov 24 11:28:18 crc kubenswrapper[5072]: I1124 11:28:18.778785 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv9x4\" (UniqueName: 
\"kubernetes.io/projected/e4d90486-6954-484a-aa10-2ffa6789cdc7-kube-api-access-nv9x4\") pod \"nova-cell1-cell-mapping-ghzbb\" (UID: \"e4d90486-6954-484a-aa10-2ffa6789cdc7\") " pod="openstack/nova-cell1-cell-mapping-ghzbb" Nov 24 11:28:18 crc kubenswrapper[5072]: I1124 11:28:18.784914 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4d90486-6954-484a-aa10-2ffa6789cdc7-scripts\") pod \"nova-cell1-cell-mapping-ghzbb\" (UID: \"e4d90486-6954-484a-aa10-2ffa6789cdc7\") " pod="openstack/nova-cell1-cell-mapping-ghzbb" Nov 24 11:28:18 crc kubenswrapper[5072]: I1124 11:28:18.785129 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4d90486-6954-484a-aa10-2ffa6789cdc7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ghzbb\" (UID: \"e4d90486-6954-484a-aa10-2ffa6789cdc7\") " pod="openstack/nova-cell1-cell-mapping-ghzbb" Nov 24 11:28:18 crc kubenswrapper[5072]: I1124 11:28:18.791962 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4d90486-6954-484a-aa10-2ffa6789cdc7-config-data\") pod \"nova-cell1-cell-mapping-ghzbb\" (UID: \"e4d90486-6954-484a-aa10-2ffa6789cdc7\") " pod="openstack/nova-cell1-cell-mapping-ghzbb" Nov 24 11:28:18 crc kubenswrapper[5072]: I1124 11:28:18.798847 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv9x4\" (UniqueName: \"kubernetes.io/projected/e4d90486-6954-484a-aa10-2ffa6789cdc7-kube-api-access-nv9x4\") pod \"nova-cell1-cell-mapping-ghzbb\" (UID: \"e4d90486-6954-484a-aa10-2ffa6789cdc7\") " pod="openstack/nova-cell1-cell-mapping-ghzbb" Nov 24 11:28:18 crc kubenswrapper[5072]: I1124 11:28:18.966900 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ghzbb" Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.045433 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38e65ee4-652d-4453-9ea6-50b067da9715" path="/var/lib/kubelet/pods/38e65ee4-652d-4453-9ea6-50b067da9715/volumes" Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.136038 5072 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod98f36c5e-b827-4fcb-ac98-8eb62f230787"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod98f36c5e-b827-4fcb-ac98-8eb62f230787] : Timed out while waiting for systemd to remove kubepods-besteffort-pod98f36c5e_b827_4fcb_ac98_8eb62f230787.slice" Nov 24 11:28:19 crc kubenswrapper[5072]: E1124 11:28:19.136091 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod98f36c5e-b827-4fcb-ac98-8eb62f230787] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod98f36c5e-b827-4fcb-ac98-8eb62f230787] : Timed out while waiting for systemd to remove kubepods-besteffort-pod98f36c5e_b827_4fcb_ac98_8eb62f230787.slice" pod="openstack/nova-scheduler-0" podUID="98f36c5e-b827-4fcb-ac98-8eb62f230787" Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.432052 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"984c9c3d-dc52-4152-8ec4-e1ed94695079","Type":"ContainerStarted","Data":"36688bb5176270a9c5bcd470a743f38c1d5ad59ff9b95d40642a60e604b94f0b"} Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.432454 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="984c9c3d-dc52-4152-8ec4-e1ed94695079" containerName="ceilometer-central-agent" containerID="cri-o://ec422472b5c5c599eb5d463d34a8c359fa5e367157abe0d52dd0facd4dab3618" gracePeriod=30 Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.432563 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.432720 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="984c9c3d-dc52-4152-8ec4-e1ed94695079" containerName="proxy-httpd" containerID="cri-o://36688bb5176270a9c5bcd470a743f38c1d5ad59ff9b95d40642a60e604b94f0b" gracePeriod=30 Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.432922 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="984c9c3d-dc52-4152-8ec4-e1ed94695079" containerName="ceilometer-notification-agent" containerID="cri-o://f332807d4940ab715a2fcc4d3258eae3fd321f2b7d1e786beb99032d1ede8dc0" gracePeriod=30 Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.432979 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="984c9c3d-dc52-4152-8ec4-e1ed94695079" containerName="sg-core" containerID="cri-o://30863584fb3ca1cbfe701ae14451e812a0fe096b373b2f14bae63c1cfa5668b1" gracePeriod=30 Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.437696 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.438455 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2","Type":"ContainerStarted","Data":"c694f6acf6af52396dcde2b546f3f28759ac132a2761d7971341b73f0f435f17"} Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.438478 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2","Type":"ContainerStarted","Data":"7cdac74e617cd61ac7bdf1c71b05601211f9e58cb768e5d05b407be135413980"} Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.461932 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.080361348 podStartE2EDuration="5.461914435s" podCreationTimestamp="2025-11-24 11:28:14 +0000 UTC" firstStartedPulling="2025-11-24 11:28:15.43509432 +0000 UTC m=+1147.146618796" lastFinishedPulling="2025-11-24 11:28:18.816647397 +0000 UTC m=+1150.528171883" observedRunningTime="2025-11-24 11:28:19.458242234 +0000 UTC m=+1151.169766720" watchObservedRunningTime="2025-11-24 11:28:19.461914435 +0000 UTC m=+1151.173438911" Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.490409 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.497570 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.504515 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.505651 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.507325 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.522167 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.5221474329999998 podStartE2EDuration="2.522147433s" podCreationTimestamp="2025-11-24 11:28:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:28:19.506190706 +0000 UTC m=+1151.217715202" watchObservedRunningTime="2025-11-24 11:28:19.522147433 +0000 UTC m=+1151.233671909" Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.538969 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.574442 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-ghzbb"] Nov 24 11:28:19 crc kubenswrapper[5072]: W1124 11:28:19.583716 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4d90486_6954_484a_aa10_2ffa6789cdc7.slice/crio-37cd9ee9c14c51dbbc5d093ebfa3ae2be91b97c9913542549bd5ec4ed3084b7a WatchSource:0}: Error finding container 37cd9ee9c14c51dbbc5d093ebfa3ae2be91b97c9913542549bd5ec4ed3084b7a: Status 404 returned error can't find the container with id 37cd9ee9c14c51dbbc5d093ebfa3ae2be91b97c9913542549bd5ec4ed3084b7a Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.705918 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgbxf\" (UniqueName: \"kubernetes.io/projected/179bc010-e872-4be0-b453-088a8260caa5-kube-api-access-xgbxf\") pod \"nova-scheduler-0\" (UID: \"179bc010-e872-4be0-b453-088a8260caa5\") " pod="openstack/nova-scheduler-0" Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.705981 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/179bc010-e872-4be0-b453-088a8260caa5-config-data\") pod \"nova-scheduler-0\" (UID: \"179bc010-e872-4be0-b453-088a8260caa5\") " pod="openstack/nova-scheduler-0" Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.706067 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/179bc010-e872-4be0-b453-088a8260caa5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"179bc010-e872-4be0-b453-088a8260caa5\") " pod="openstack/nova-scheduler-0" Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.807906 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/179bc010-e872-4be0-b453-088a8260caa5-config-data\") pod \"nova-scheduler-0\" (UID: \"179bc010-e872-4be0-b453-088a8260caa5\") " pod="openstack/nova-scheduler-0" Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.808083 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/179bc010-e872-4be0-b453-088a8260caa5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"179bc010-e872-4be0-b453-088a8260caa5\") " pod="openstack/nova-scheduler-0" Nov 24 11:28:19 crc 
kubenswrapper[5072]: I1124 11:28:19.808184 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgbxf\" (UniqueName: \"kubernetes.io/projected/179bc010-e872-4be0-b453-088a8260caa5-kube-api-access-xgbxf\") pod \"nova-scheduler-0\" (UID: \"179bc010-e872-4be0-b453-088a8260caa5\") " pod="openstack/nova-scheduler-0" Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.812329 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/179bc010-e872-4be0-b453-088a8260caa5-config-data\") pod \"nova-scheduler-0\" (UID: \"179bc010-e872-4be0-b453-088a8260caa5\") " pod="openstack/nova-scheduler-0" Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.812977 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/179bc010-e872-4be0-b453-088a8260caa5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"179bc010-e872-4be0-b453-088a8260caa5\") " pod="openstack/nova-scheduler-0" Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.824305 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgbxf\" (UniqueName: \"kubernetes.io/projected/179bc010-e872-4be0-b453-088a8260caa5-kube-api-access-xgbxf\") pod \"nova-scheduler-0\" (UID: \"179bc010-e872-4be0-b453-088a8260caa5\") " pod="openstack/nova-scheduler-0" Nov 24 11:28:19 crc kubenswrapper[5072]: I1124 11:28:19.830975 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 11:28:20 crc kubenswrapper[5072]: I1124 11:28:20.359454 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:28:20 crc kubenswrapper[5072]: I1124 11:28:20.448464 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"179bc010-e872-4be0-b453-088a8260caa5","Type":"ContainerStarted","Data":"84888f0d4206652a8cc907ccdd9fbea76ae5ea53a805627bb285487adc7f6f4e"} Nov 24 11:28:20 crc kubenswrapper[5072]: I1124 11:28:20.452998 5072 generic.go:334] "Generic (PLEG): container finished" podID="984c9c3d-dc52-4152-8ec4-e1ed94695079" containerID="36688bb5176270a9c5bcd470a743f38c1d5ad59ff9b95d40642a60e604b94f0b" exitCode=0 Nov 24 11:28:20 crc kubenswrapper[5072]: I1124 11:28:20.453036 5072 generic.go:334] "Generic (PLEG): container finished" podID="984c9c3d-dc52-4152-8ec4-e1ed94695079" containerID="30863584fb3ca1cbfe701ae14451e812a0fe096b373b2f14bae63c1cfa5668b1" exitCode=2 Nov 24 11:28:20 crc kubenswrapper[5072]: I1124 11:28:20.453045 5072 generic.go:334] "Generic (PLEG): container finished" podID="984c9c3d-dc52-4152-8ec4-e1ed94695079" containerID="f332807d4940ab715a2fcc4d3258eae3fd321f2b7d1e786beb99032d1ede8dc0" exitCode=0 Nov 24 11:28:20 crc kubenswrapper[5072]: I1124 11:28:20.453048 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"984c9c3d-dc52-4152-8ec4-e1ed94695079","Type":"ContainerDied","Data":"36688bb5176270a9c5bcd470a743f38c1d5ad59ff9b95d40642a60e604b94f0b"} Nov 24 11:28:20 crc kubenswrapper[5072]: I1124 11:28:20.453087 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"984c9c3d-dc52-4152-8ec4-e1ed94695079","Type":"ContainerDied","Data":"30863584fb3ca1cbfe701ae14451e812a0fe096b373b2f14bae63c1cfa5668b1"} Nov 24 11:28:20 crc kubenswrapper[5072]: I1124 11:28:20.453107 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"984c9c3d-dc52-4152-8ec4-e1ed94695079","Type":"ContainerDied","Data":"f332807d4940ab715a2fcc4d3258eae3fd321f2b7d1e786beb99032d1ede8dc0"} Nov 24 11:28:20 crc kubenswrapper[5072]: I1124 11:28:20.455227 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ghzbb" event={"ID":"e4d90486-6954-484a-aa10-2ffa6789cdc7","Type":"ContainerStarted","Data":"adbbafa7dba3ea0127645167357936a6a57585ed79b55e0b0d66b94e6662c686"} Nov 24 11:28:20 crc kubenswrapper[5072]: I1124 11:28:20.455255 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ghzbb" event={"ID":"e4d90486-6954-484a-aa10-2ffa6789cdc7","Type":"ContainerStarted","Data":"37cd9ee9c14c51dbbc5d093ebfa3ae2be91b97c9913542549bd5ec4ed3084b7a"} Nov 24 11:28:20 crc kubenswrapper[5072]: I1124 11:28:20.483105 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-ghzbb" podStartSLOduration=2.483085171 podStartE2EDuration="2.483085171s" podCreationTimestamp="2025-11-24 11:28:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:28:20.474446096 +0000 UTC m=+1152.185970582" watchObservedRunningTime="2025-11-24 11:28:20.483085171 +0000 UTC m=+1152.194609657" Nov 24 11:28:20 crc kubenswrapper[5072]: I1124 11:28:20.818706 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b856c5697-hl4mn" Nov 24 11:28:20 crc kubenswrapper[5072]: I1124 11:28:20.907410 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-5pgtx"] Nov 24 11:28:20 crc kubenswrapper[5072]: I1124 11:28:20.907732 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-566b5b7845-5pgtx" podUID="3524341f-32c2-40b8-bfe3-f551f8e48de0" containerName="dnsmasq-dns" containerID="cri-o://61e4480db97e7be4cbb9f676fa18803cc688d2588e03b938aeb98351268cc76f" gracePeriod=10 Nov 24 11:28:21 crc kubenswrapper[5072]: I1124 11:28:21.029878 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98f36c5e-b827-4fcb-ac98-8eb62f230787" path="/var/lib/kubelet/pods/98f36c5e-b827-4fcb-ac98-8eb62f230787/volumes" Nov 24 11:28:21 crc kubenswrapper[5072]: I1124 11:28:21.470668 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"179bc010-e872-4be0-b453-088a8260caa5","Type":"ContainerStarted","Data":"c6e3298fd45803c6c49d67a1ab7743f89778f20e5d406858ad91f8a27395c48c"} Nov 24 11:28:21 crc kubenswrapper[5072]: I1124 11:28:21.471801 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-5pgtx" Nov 24 11:28:21 crc kubenswrapper[5072]: I1124 11:28:21.479808 5072 generic.go:334] "Generic (PLEG): container finished" podID="3524341f-32c2-40b8-bfe3-f551f8e48de0" containerID="61e4480db97e7be4cbb9f676fa18803cc688d2588e03b938aeb98351268cc76f" exitCode=0 Nov 24 11:28:21 crc kubenswrapper[5072]: I1124 11:28:21.479872 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-5pgtx" event={"ID":"3524341f-32c2-40b8-bfe3-f551f8e48de0","Type":"ContainerDied","Data":"61e4480db97e7be4cbb9f676fa18803cc688d2588e03b938aeb98351268cc76f"} Nov 24 11:28:21 crc kubenswrapper[5072]: I1124 11:28:21.479900 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-5pgtx" event={"ID":"3524341f-32c2-40b8-bfe3-f551f8e48de0","Type":"ContainerDied","Data":"2056516b7bd7c64638826d8dc8be673c35deb99c28c2dd31e1600cc00ff71bc3"} Nov 24 11:28:21 crc kubenswrapper[5072]: I1124 11:28:21.479917 5072 scope.go:117] "RemoveContainer" containerID="61e4480db97e7be4cbb9f676fa18803cc688d2588e03b938aeb98351268cc76f" Nov 24 11:28:21 crc kubenswrapper[5072]: I1124 11:28:21.499269 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.499250972 podStartE2EDuration="2.499250972s" podCreationTimestamp="2025-11-24 11:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:28:21.485783487 +0000 UTC m=+1153.197307963" watchObservedRunningTime="2025-11-24 11:28:21.499250972 +0000 UTC m=+1153.210775448" Nov 24 11:28:21 crc kubenswrapper[5072]: I1124 11:28:21.508020 5072 scope.go:117] "RemoveContainer" containerID="36aadd7da48dcfe3611e54aed6f2269821ff6eaf7dff59ccd1c6c694d1f79054" Nov 24 11:28:21 crc kubenswrapper[5072]: I1124 11:28:21.536174 5072 scope.go:117] "RemoveContainer" containerID="61e4480db97e7be4cbb9f676fa18803cc688d2588e03b938aeb98351268cc76f" Nov 24 11:28:21 crc kubenswrapper[5072]: E1124 11:28:21.537732 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61e4480db97e7be4cbb9f676fa18803cc688d2588e03b938aeb98351268cc76f\": container with ID starting with 61e4480db97e7be4cbb9f676fa18803cc688d2588e03b938aeb98351268cc76f not found: ID does not exist" containerID="61e4480db97e7be4cbb9f676fa18803cc688d2588e03b938aeb98351268cc76f" Nov 24 11:28:21 crc kubenswrapper[5072]: I1124 11:28:21.537765 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61e4480db97e7be4cbb9f676fa18803cc688d2588e03b938aeb98351268cc76f"} err="failed to get container status \"61e4480db97e7be4cbb9f676fa18803cc688d2588e03b938aeb98351268cc76f\": rpc error: code = NotFound desc = could not find container \"61e4480db97e7be4cbb9f676fa18803cc688d2588e03b938aeb98351268cc76f\": container with ID starting with 61e4480db97e7be4cbb9f676fa18803cc688d2588e03b938aeb98351268cc76f not found: ID does not exist" Nov 24 11:28:21 crc kubenswrapper[5072]: I1124 11:28:21.537785 5072 scope.go:117] "RemoveContainer" containerID="36aadd7da48dcfe3611e54aed6f2269821ff6eaf7dff59ccd1c6c694d1f79054" Nov 24 11:28:21 crc kubenswrapper[5072]: E1124 11:28:21.542709 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36aadd7da48dcfe3611e54aed6f2269821ff6eaf7dff59ccd1c6c694d1f79054\": container 
with ID starting with 36aadd7da48dcfe3611e54aed6f2269821ff6eaf7dff59ccd1c6c694d1f79054 not found: ID does not exist" containerID="36aadd7da48dcfe3611e54aed6f2269821ff6eaf7dff59ccd1c6c694d1f79054" Nov 24 11:28:21 crc kubenswrapper[5072]: I1124 11:28:21.542747 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36aadd7da48dcfe3611e54aed6f2269821ff6eaf7dff59ccd1c6c694d1f79054"} err="failed to get container status \"36aadd7da48dcfe3611e54aed6f2269821ff6eaf7dff59ccd1c6c694d1f79054\": rpc error: code = NotFound desc = could not find container \"36aadd7da48dcfe3611e54aed6f2269821ff6eaf7dff59ccd1c6c694d1f79054\": container with ID starting with 36aadd7da48dcfe3611e54aed6f2269821ff6eaf7dff59ccd1c6c694d1f79054 not found: ID does not exist" Nov 24 11:28:21 crc kubenswrapper[5072]: I1124 11:28:21.652026 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3524341f-32c2-40b8-bfe3-f551f8e48de0-config\") pod \"3524341f-32c2-40b8-bfe3-f551f8e48de0\" (UID: \"3524341f-32c2-40b8-bfe3-f551f8e48de0\") " Nov 24 11:28:21 crc kubenswrapper[5072]: I1124 11:28:21.652138 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3524341f-32c2-40b8-bfe3-f551f8e48de0-ovsdbserver-sb\") pod \"3524341f-32c2-40b8-bfe3-f551f8e48de0\" (UID: \"3524341f-32c2-40b8-bfe3-f551f8e48de0\") " Nov 24 11:28:21 crc kubenswrapper[5072]: I1124 11:28:21.652260 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbtg7\" (UniqueName: \"kubernetes.io/projected/3524341f-32c2-40b8-bfe3-f551f8e48de0-kube-api-access-mbtg7\") pod \"3524341f-32c2-40b8-bfe3-f551f8e48de0\" (UID: \"3524341f-32c2-40b8-bfe3-f551f8e48de0\") " Nov 24 11:28:21 crc kubenswrapper[5072]: I1124 11:28:21.652338 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3524341f-32c2-40b8-bfe3-f551f8e48de0-ovsdbserver-nb\") pod \"3524341f-32c2-40b8-bfe3-f551f8e48de0\" (UID: \"3524341f-32c2-40b8-bfe3-f551f8e48de0\") " Nov 24 11:28:21 crc kubenswrapper[5072]: I1124 11:28:21.652366 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3524341f-32c2-40b8-bfe3-f551f8e48de0-dns-svc\") pod \"3524341f-32c2-40b8-bfe3-f551f8e48de0\" (UID: \"3524341f-32c2-40b8-bfe3-f551f8e48de0\") " Nov 24 11:28:21 crc kubenswrapper[5072]: I1124 11:28:21.664546 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3524341f-32c2-40b8-bfe3-f551f8e48de0-kube-api-access-mbtg7" (OuterVolumeSpecName: "kube-api-access-mbtg7") pod "3524341f-32c2-40b8-bfe3-f551f8e48de0" (UID: "3524341f-32c2-40b8-bfe3-f551f8e48de0"). InnerVolumeSpecName "kube-api-access-mbtg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:28:21 crc kubenswrapper[5072]: I1124 11:28:21.711336 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3524341f-32c2-40b8-bfe3-f551f8e48de0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3524341f-32c2-40b8-bfe3-f551f8e48de0" (UID: "3524341f-32c2-40b8-bfe3-f551f8e48de0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:28:21 crc kubenswrapper[5072]: I1124 11:28:21.716344 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3524341f-32c2-40b8-bfe3-f551f8e48de0-config" (OuterVolumeSpecName: "config") pod "3524341f-32c2-40b8-bfe3-f551f8e48de0" (UID: "3524341f-32c2-40b8-bfe3-f551f8e48de0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:28:21 crc kubenswrapper[5072]: I1124 11:28:21.720203 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3524341f-32c2-40b8-bfe3-f551f8e48de0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3524341f-32c2-40b8-bfe3-f551f8e48de0" (UID: "3524341f-32c2-40b8-bfe3-f551f8e48de0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:28:21 crc kubenswrapper[5072]: I1124 11:28:21.726995 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3524341f-32c2-40b8-bfe3-f551f8e48de0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3524341f-32c2-40b8-bfe3-f551f8e48de0" (UID: "3524341f-32c2-40b8-bfe3-f551f8e48de0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:28:21 crc kubenswrapper[5072]: I1124 11:28:21.760290 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3524341f-32c2-40b8-bfe3-f551f8e48de0-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:21 crc kubenswrapper[5072]: I1124 11:28:21.760317 5072 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3524341f-32c2-40b8-bfe3-f551f8e48de0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:21 crc kubenswrapper[5072]: I1124 11:28:21.760327 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbtg7\" (UniqueName: \"kubernetes.io/projected/3524341f-32c2-40b8-bfe3-f551f8e48de0-kube-api-access-mbtg7\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:21 crc kubenswrapper[5072]: I1124 11:28:21.760336 5072 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3524341f-32c2-40b8-bfe3-f551f8e48de0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:21 crc kubenswrapper[5072]: I1124 11:28:21.760345 5072 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3524341f-32c2-40b8-bfe3-f551f8e48de0-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.010467 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.167985 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-scripts\") pod \"984c9c3d-dc52-4152-8ec4-e1ed94695079\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.168075 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-config-data\") pod \"984c9c3d-dc52-4152-8ec4-e1ed94695079\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.168132 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcztg\" (UniqueName: \"kubernetes.io/projected/984c9c3d-dc52-4152-8ec4-e1ed94695079-kube-api-access-mcztg\") pod \"984c9c3d-dc52-4152-8ec4-e1ed94695079\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.168167 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-combined-ca-bundle\") pod \"984c9c3d-dc52-4152-8ec4-e1ed94695079\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.168254 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/984c9c3d-dc52-4152-8ec4-e1ed94695079-run-httpd\") pod \"984c9c3d-dc52-4152-8ec4-e1ed94695079\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.168300 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/984c9c3d-dc52-4152-8ec4-e1ed94695079-log-httpd\") pod \"984c9c3d-dc52-4152-8ec4-e1ed94695079\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.168335 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-ceilometer-tls-certs\") pod \"984c9c3d-dc52-4152-8ec4-e1ed94695079\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.168392 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-sg-core-conf-yaml\") pod \"984c9c3d-dc52-4152-8ec4-e1ed94695079\" (UID: \"984c9c3d-dc52-4152-8ec4-e1ed94695079\") " Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.168764 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/984c9c3d-dc52-4152-8ec4-e1ed94695079-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "984c9c3d-dc52-4152-8ec4-e1ed94695079" (UID: "984c9c3d-dc52-4152-8ec4-e1ed94695079"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.168973 5072 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/984c9c3d-dc52-4152-8ec4-e1ed94695079-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.169388 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/984c9c3d-dc52-4152-8ec4-e1ed94695079-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "984c9c3d-dc52-4152-8ec4-e1ed94695079" (UID: "984c9c3d-dc52-4152-8ec4-e1ed94695079"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.174126 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/984c9c3d-dc52-4152-8ec4-e1ed94695079-kube-api-access-mcztg" (OuterVolumeSpecName: "kube-api-access-mcztg") pod "984c9c3d-dc52-4152-8ec4-e1ed94695079" (UID: "984c9c3d-dc52-4152-8ec4-e1ed94695079"). InnerVolumeSpecName "kube-api-access-mcztg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.177556 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-scripts" (OuterVolumeSpecName: "scripts") pod "984c9c3d-dc52-4152-8ec4-e1ed94695079" (UID: "984c9c3d-dc52-4152-8ec4-e1ed94695079"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.201532 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "984c9c3d-dc52-4152-8ec4-e1ed94695079" (UID: "984c9c3d-dc52-4152-8ec4-e1ed94695079"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.217712 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "984c9c3d-dc52-4152-8ec4-e1ed94695079" (UID: "984c9c3d-dc52-4152-8ec4-e1ed94695079"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.234858 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "984c9c3d-dc52-4152-8ec4-e1ed94695079" (UID: "984c9c3d-dc52-4152-8ec4-e1ed94695079"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.255556 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-config-data" (OuterVolumeSpecName: "config-data") pod "984c9c3d-dc52-4152-8ec4-e1ed94695079" (UID: "984c9c3d-dc52-4152-8ec4-e1ed94695079"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.270467 5072 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.270503 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.270515 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.270527 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcztg\" (UniqueName: \"kubernetes.io/projected/984c9c3d-dc52-4152-8ec4-e1ed94695079-kube-api-access-mcztg\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.270539 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.270550 5072 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/984c9c3d-dc52-4152-8ec4-e1ed94695079-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.270562 5072 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/984c9c3d-dc52-4152-8ec4-e1ed94695079-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.491883 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-5pgtx" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.496087 5072 generic.go:334] "Generic (PLEG): container finished" podID="984c9c3d-dc52-4152-8ec4-e1ed94695079" containerID="ec422472b5c5c599eb5d463d34a8c359fa5e367157abe0d52dd0facd4dab3618" exitCode=0 Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.496883 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.497442 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"984c9c3d-dc52-4152-8ec4-e1ed94695079","Type":"ContainerDied","Data":"ec422472b5c5c599eb5d463d34a8c359fa5e367157abe0d52dd0facd4dab3618"} Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.497506 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"984c9c3d-dc52-4152-8ec4-e1ed94695079","Type":"ContainerDied","Data":"b09ef10479e208436d8c13cb75e76c1e5774fc55a427f890dd34f299845bf2b6"} Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.497534 5072 scope.go:117] "RemoveContainer" containerID="36688bb5176270a9c5bcd470a743f38c1d5ad59ff9b95d40642a60e604b94f0b" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.539963 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-5pgtx"] Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.548796 5072 scope.go:117] "RemoveContainer" containerID="30863584fb3ca1cbfe701ae14451e812a0fe096b373b2f14bae63c1cfa5668b1" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.550642 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-5pgtx"] Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.557773 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.569929 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.575568 5072 scope.go:117] "RemoveContainer" containerID="f332807d4940ab715a2fcc4d3258eae3fd321f2b7d1e786beb99032d1ede8dc0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.584010 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:28:22 crc kubenswrapper[5072]: E1124 11:28:22.584720 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3524341f-32c2-40b8-bfe3-f551f8e48de0" containerName="init" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.584800 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="3524341f-32c2-40b8-bfe3-f551f8e48de0" containerName="init" Nov 24 11:28:22 crc kubenswrapper[5072]: E1124 11:28:22.584871 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="984c9c3d-dc52-4152-8ec4-e1ed94695079" containerName="sg-core" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.584953 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="984c9c3d-dc52-4152-8ec4-e1ed94695079" containerName="sg-core" Nov 24 11:28:22 crc kubenswrapper[5072]: E1124 11:28:22.585016 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3524341f-32c2-40b8-bfe3-f551f8e48de0" containerName="dnsmasq-dns" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.585067 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="3524341f-32c2-40b8-bfe3-f551f8e48de0" containerName="dnsmasq-dns" Nov 24 11:28:22 crc kubenswrapper[5072]: E1124 11:28:22.585136 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="984c9c3d-dc52-4152-8ec4-e1ed94695079" containerName="ceilometer-notification-agent" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.585186 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="984c9c3d-dc52-4152-8ec4-e1ed94695079" containerName="ceilometer-notification-agent" Nov 24 11:28:22 crc 
kubenswrapper[5072]: E1124 11:28:22.585251 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="984c9c3d-dc52-4152-8ec4-e1ed94695079" containerName="proxy-httpd" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.585343 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="984c9c3d-dc52-4152-8ec4-e1ed94695079" containerName="proxy-httpd" Nov 24 11:28:22 crc kubenswrapper[5072]: E1124 11:28:22.585438 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="984c9c3d-dc52-4152-8ec4-e1ed94695079" containerName="ceilometer-central-agent" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.585492 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="984c9c3d-dc52-4152-8ec4-e1ed94695079" containerName="ceilometer-central-agent" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.585703 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="984c9c3d-dc52-4152-8ec4-e1ed94695079" containerName="ceilometer-notification-agent" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.585769 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="984c9c3d-dc52-4152-8ec4-e1ed94695079" containerName="ceilometer-central-agent" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.585828 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="984c9c3d-dc52-4152-8ec4-e1ed94695079" containerName="proxy-httpd" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.585886 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="984c9c3d-dc52-4152-8ec4-e1ed94695079" containerName="sg-core" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.593708 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="3524341f-32c2-40b8-bfe3-f551f8e48de0" containerName="dnsmasq-dns" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.597403 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.602536 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.602670 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.602786 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.608150 5072 scope.go:117] "RemoveContainer" containerID="ec422472b5c5c599eb5d463d34a8c359fa5e367157abe0d52dd0facd4dab3618" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.632200 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.651559 5072 scope.go:117] "RemoveContainer" containerID="36688bb5176270a9c5bcd470a743f38c1d5ad59ff9b95d40642a60e604b94f0b" Nov 24 11:28:22 crc kubenswrapper[5072]: E1124 11:28:22.653859 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36688bb5176270a9c5bcd470a743f38c1d5ad59ff9b95d40642a60e604b94f0b\": container with ID starting with 36688bb5176270a9c5bcd470a743f38c1d5ad59ff9b95d40642a60e604b94f0b not found: ID does not exist" containerID="36688bb5176270a9c5bcd470a743f38c1d5ad59ff9b95d40642a60e604b94f0b" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.653892 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36688bb5176270a9c5bcd470a743f38c1d5ad59ff9b95d40642a60e604b94f0b"} err="failed to get container status \"36688bb5176270a9c5bcd470a743f38c1d5ad59ff9b95d40642a60e604b94f0b\": rpc error: code = NotFound desc = could not find container \"36688bb5176270a9c5bcd470a743f38c1d5ad59ff9b95d40642a60e604b94f0b\": container with ID starting with 36688bb5176270a9c5bcd470a743f38c1d5ad59ff9b95d40642a60e604b94f0b not found: ID does not exist" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.653914 5072 scope.go:117] "RemoveContainer" containerID="30863584fb3ca1cbfe701ae14451e812a0fe096b373b2f14bae63c1cfa5668b1" Nov 24 11:28:22 crc kubenswrapper[5072]: E1124 11:28:22.654307 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30863584fb3ca1cbfe701ae14451e812a0fe096b373b2f14bae63c1cfa5668b1\": container with ID starting with 30863584fb3ca1cbfe701ae14451e812a0fe096b373b2f14bae63c1cfa5668b1 not found: ID does not exist" containerID="30863584fb3ca1cbfe701ae14451e812a0fe096b373b2f14bae63c1cfa5668b1" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.654325 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30863584fb3ca1cbfe701ae14451e812a0fe096b373b2f14bae63c1cfa5668b1"} err="failed to get container status \"30863584fb3ca1cbfe701ae14451e812a0fe096b373b2f14bae63c1cfa5668b1\": rpc error: code = NotFound desc = could not find container \"30863584fb3ca1cbfe701ae14451e812a0fe096b373b2f14bae63c1cfa5668b1\": container with ID starting with 30863584fb3ca1cbfe701ae14451e812a0fe096b373b2f14bae63c1cfa5668b1 not found: ID does not exist" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.654472 5072 scope.go:117] "RemoveContainer" containerID="f332807d4940ab715a2fcc4d3258eae3fd321f2b7d1e786beb99032d1ede8dc0" Nov 24 11:28:22 
crc kubenswrapper[5072]: E1124 11:28:22.655669 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f332807d4940ab715a2fcc4d3258eae3fd321f2b7d1e786beb99032d1ede8dc0\": container with ID starting with f332807d4940ab715a2fcc4d3258eae3fd321f2b7d1e786beb99032d1ede8dc0 not found: ID does not exist" containerID="f332807d4940ab715a2fcc4d3258eae3fd321f2b7d1e786beb99032d1ede8dc0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.655703 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f332807d4940ab715a2fcc4d3258eae3fd321f2b7d1e786beb99032d1ede8dc0"} err="failed to get container status \"f332807d4940ab715a2fcc4d3258eae3fd321f2b7d1e786beb99032d1ede8dc0\": rpc error: code = NotFound desc = could not find container \"f332807d4940ab715a2fcc4d3258eae3fd321f2b7d1e786beb99032d1ede8dc0\": container with ID starting with f332807d4940ab715a2fcc4d3258eae3fd321f2b7d1e786beb99032d1ede8dc0 not found: ID does not exist" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.655719 5072 scope.go:117] "RemoveContainer" containerID="ec422472b5c5c599eb5d463d34a8c359fa5e367157abe0d52dd0facd4dab3618" Nov 24 11:28:22 crc kubenswrapper[5072]: E1124 11:28:22.655993 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec422472b5c5c599eb5d463d34a8c359fa5e367157abe0d52dd0facd4dab3618\": container with ID starting with ec422472b5c5c599eb5d463d34a8c359fa5e367157abe0d52dd0facd4dab3618 not found: ID does not exist" containerID="ec422472b5c5c599eb5d463d34a8c359fa5e367157abe0d52dd0facd4dab3618" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.656016 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec422472b5c5c599eb5d463d34a8c359fa5e367157abe0d52dd0facd4dab3618"} err="failed to get container status \"ec422472b5c5c599eb5d463d34a8c359fa5e367157abe0d52dd0facd4dab3618\": rpc error: code = NotFound desc = could not find container \"ec422472b5c5c599eb5d463d34a8c359fa5e367157abe0d52dd0facd4dab3618\": container with ID starting with ec422472b5c5c599eb5d463d34a8c359fa5e367157abe0d52dd0facd4dab3618 not found: ID does not exist" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.779236 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-config-data\") pod \"ceilometer-0\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " pod="openstack/ceilometer-0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.779313 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k44jn\" (UniqueName: \"kubernetes.io/projected/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-kube-api-access-k44jn\") pod \"ceilometer-0\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " pod="openstack/ceilometer-0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.779399 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-run-httpd\") pod \"ceilometer-0\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " pod="openstack/ceilometer-0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.779536 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " pod="openstack/ceilometer-0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.779861 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-scripts\") pod \"ceilometer-0\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " pod="openstack/ceilometer-0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.780060 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " pod="openstack/ceilometer-0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.780148 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-log-httpd\") pod \"ceilometer-0\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " pod="openstack/ceilometer-0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.780249 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " pod="openstack/ceilometer-0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.882424 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " pod="openstack/ceilometer-0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.882496 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-log-httpd\") pod \"ceilometer-0\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " pod="openstack/ceilometer-0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.882529 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " pod="openstack/ceilometer-0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.882615 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-config-data\") pod \"ceilometer-0\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " pod="openstack/ceilometer-0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.882676 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k44jn\" (UniqueName: \"kubernetes.io/projected/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-kube-api-access-k44jn\") pod \"ceilometer-0\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " pod="openstack/ceilometer-0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 
11:28:22.882713 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-run-httpd\") pod \"ceilometer-0\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " pod="openstack/ceilometer-0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.882773 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " pod="openstack/ceilometer-0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.882917 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-scripts\") pod \"ceilometer-0\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " pod="openstack/ceilometer-0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.883768 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-run-httpd\") pod \"ceilometer-0\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " pod="openstack/ceilometer-0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.884057 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-log-httpd\") pod \"ceilometer-0\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " pod="openstack/ceilometer-0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.886564 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " pod="openstack/ceilometer-0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.887176 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " pod="openstack/ceilometer-0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.888313 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-scripts\") pod \"ceilometer-0\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " pod="openstack/ceilometer-0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.891638 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-config-data\") pod \"ceilometer-0\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " pod="openstack/ceilometer-0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.895123 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " pod="openstack/ceilometer-0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.923181 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-k44jn\" (UniqueName: \"kubernetes.io/projected/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-kube-api-access-k44jn\") pod \"ceilometer-0\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " pod="openstack/ceilometer-0" Nov 24 11:28:22 crc kubenswrapper[5072]: I1124 11:28:22.955620 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:28:23 crc kubenswrapper[5072]: I1124 11:28:23.041932 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3524341f-32c2-40b8-bfe3-f551f8e48de0" path="/var/lib/kubelet/pods/3524341f-32c2-40b8-bfe3-f551f8e48de0/volumes" Nov 24 11:28:23 crc kubenswrapper[5072]: I1124 11:28:23.043402 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="984c9c3d-dc52-4152-8ec4-e1ed94695079" path="/var/lib/kubelet/pods/984c9c3d-dc52-4152-8ec4-e1ed94695079/volumes" Nov 24 11:28:23 crc kubenswrapper[5072]: I1124 11:28:23.439298 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:28:23 crc kubenswrapper[5072]: W1124 11:28:23.449121 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod761b2964_cd70_47d9_ade7_8ddfb3eb73c3.slice/crio-af411ad8d3469e55fb5440dd5046e8278b736be7bb284db06c93028f44c90340 WatchSource:0}: Error finding container af411ad8d3469e55fb5440dd5046e8278b736be7bb284db06c93028f44c90340: Status 404 returned error can't find the container with id af411ad8d3469e55fb5440dd5046e8278b736be7bb284db06c93028f44c90340 Nov 24 11:28:23 crc kubenswrapper[5072]: I1124 11:28:23.509646 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"761b2964-cd70-47d9-ade7-8ddfb3eb73c3","Type":"ContainerStarted","Data":"af411ad8d3469e55fb5440dd5046e8278b736be7bb284db06c93028f44c90340"} Nov 24 11:28:24 crc kubenswrapper[5072]: I1124 11:28:24.521975 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"761b2964-cd70-47d9-ade7-8ddfb3eb73c3","Type":"ContainerStarted","Data":"ffd0b3500c9774fad4dcbaf75c93c9ea57223eb9a31a2ce6a5960ac413fb7291"} Nov 24 11:28:24 crc kubenswrapper[5072]: I1124 11:28:24.831168 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 11:28:25 crc kubenswrapper[5072]: I1124 11:28:25.537710 5072 generic.go:334] "Generic (PLEG): container finished" podID="e4d90486-6954-484a-aa10-2ffa6789cdc7" containerID="adbbafa7dba3ea0127645167357936a6a57585ed79b55e0b0d66b94e6662c686" exitCode=0 Nov 24 11:28:25 crc kubenswrapper[5072]: I1124 11:28:25.537815 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ghzbb" event={"ID":"e4d90486-6954-484a-aa10-2ffa6789cdc7","Type":"ContainerDied","Data":"adbbafa7dba3ea0127645167357936a6a57585ed79b55e0b0d66b94e6662c686"} Nov 24 11:28:25 crc kubenswrapper[5072]: I1124 11:28:25.544186 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"761b2964-cd70-47d9-ade7-8ddfb3eb73c3","Type":"ContainerStarted","Data":"4630d6afa767f2b989b968e94698ffa151c51abba3dbaf45c5337880ca956ce5"} Nov 24 11:28:26 crc kubenswrapper[5072]: I1124 11:28:26.557413 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"761b2964-cd70-47d9-ade7-8ddfb3eb73c3","Type":"ContainerStarted","Data":"972dc3a765f700930ddd30765dfcfd8c0d7199181792814ea03e27923f79a850"} Nov 24 11:28:26 crc 
kubenswrapper[5072]: I1124 11:28:26.910995 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ghzbb" Nov 24 11:28:26 crc kubenswrapper[5072]: I1124 11:28:26.965063 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv9x4\" (UniqueName: \"kubernetes.io/projected/e4d90486-6954-484a-aa10-2ffa6789cdc7-kube-api-access-nv9x4\") pod \"e4d90486-6954-484a-aa10-2ffa6789cdc7\" (UID: \"e4d90486-6954-484a-aa10-2ffa6789cdc7\") " Nov 24 11:28:26 crc kubenswrapper[5072]: I1124 11:28:26.965153 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4d90486-6954-484a-aa10-2ffa6789cdc7-config-data\") pod \"e4d90486-6954-484a-aa10-2ffa6789cdc7\" (UID: \"e4d90486-6954-484a-aa10-2ffa6789cdc7\") " Nov 24 11:28:26 crc kubenswrapper[5072]: I1124 11:28:26.965236 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4d90486-6954-484a-aa10-2ffa6789cdc7-scripts\") pod \"e4d90486-6954-484a-aa10-2ffa6789cdc7\" (UID: \"e4d90486-6954-484a-aa10-2ffa6789cdc7\") " Nov 24 11:28:26 crc kubenswrapper[5072]: I1124 11:28:26.965288 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4d90486-6954-484a-aa10-2ffa6789cdc7-combined-ca-bundle\") pod \"e4d90486-6954-484a-aa10-2ffa6789cdc7\" (UID: \"e4d90486-6954-484a-aa10-2ffa6789cdc7\") " Nov 24 11:28:26 crc kubenswrapper[5072]: I1124 11:28:26.969346 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4d90486-6954-484a-aa10-2ffa6789cdc7-kube-api-access-nv9x4" (OuterVolumeSpecName: "kube-api-access-nv9x4") pod "e4d90486-6954-484a-aa10-2ffa6789cdc7" (UID: "e4d90486-6954-484a-aa10-2ffa6789cdc7"). InnerVolumeSpecName "kube-api-access-nv9x4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:28:26 crc kubenswrapper[5072]: I1124 11:28:26.970135 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d90486-6954-484a-aa10-2ffa6789cdc7-scripts" (OuterVolumeSpecName: "scripts") pod "e4d90486-6954-484a-aa10-2ffa6789cdc7" (UID: "e4d90486-6954-484a-aa10-2ffa6789cdc7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:28:26 crc kubenswrapper[5072]: I1124 11:28:26.988864 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d90486-6954-484a-aa10-2ffa6789cdc7-config-data" (OuterVolumeSpecName: "config-data") pod "e4d90486-6954-484a-aa10-2ffa6789cdc7" (UID: "e4d90486-6954-484a-aa10-2ffa6789cdc7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:28:26 crc kubenswrapper[5072]: I1124 11:28:26.999423 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d90486-6954-484a-aa10-2ffa6789cdc7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4d90486-6954-484a-aa10-2ffa6789cdc7" (UID: "e4d90486-6954-484a-aa10-2ffa6789cdc7"). InnerVolumeSpecName "combined-ca-bundle". 
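
The UnmountVolume.TearDown entries above show the kubelet's volume manager dismantling the volumes of the completed nova-cell1-cell-mapping-ghzbb job pod one by one; once each TearDown succeeds, the reconciler reports the volume as detached and the pod's directory under /var/lib/kubelet/pods is eventually removed (the "Cleaned up orphaned pod volumes dir" entries earlier show that final step). A minimal sketch, assuming the standard /var/lib/kubelet/pods/<uid>/volumes layout referenced in this log, of checking from the node what is still mounted for a pod (the UID below is the one from these entries):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Pod UID taken from the log entries above; adjust as needed.
        podUID := "e4d90486-6954-484a-aa10-2ffa6789cdc7"
        volumesDir := filepath.Join("/var/lib/kubelet/pods", podUID, "volumes")

        plugins, err := os.ReadDir(volumesDir)
        if err != nil {
            // After a successful teardown the directory tree is removed,
            // so a not-found error here means cleanup already completed.
            fmt.Println("volumes dir gone or unreadable:", err)
            return
        }
        for _, plugin := range plugins {
            // One subdirectory per volume plugin, e.g. kubernetes.io~secret.
            vols, _ := os.ReadDir(filepath.Join(volumesDir, plugin.Name()))
            for _, v := range vols {
                fmt.Printf("still present: %s/%s\n", plugin.Name(), v.Name())
            }
        }
    }
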
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:28:27 crc kubenswrapper[5072]: I1124 11:28:27.067057 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4d90486-6954-484a-aa10-2ffa6789cdc7-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:27 crc kubenswrapper[5072]: I1124 11:28:27.067084 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4d90486-6954-484a-aa10-2ffa6789cdc7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:27 crc kubenswrapper[5072]: I1124 11:28:27.067095 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv9x4\" (UniqueName: \"kubernetes.io/projected/e4d90486-6954-484a-aa10-2ffa6789cdc7-kube-api-access-nv9x4\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:27 crc kubenswrapper[5072]: I1124 11:28:27.067104 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4d90486-6954-484a-aa10-2ffa6789cdc7-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:27 crc kubenswrapper[5072]: I1124 11:28:27.566279 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ghzbb" event={"ID":"e4d90486-6954-484a-aa10-2ffa6789cdc7","Type":"ContainerDied","Data":"37cd9ee9c14c51dbbc5d093ebfa3ae2be91b97c9913542549bd5ec4ed3084b7a"} Nov 24 11:28:27 crc kubenswrapper[5072]: I1124 11:28:27.566316 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37cd9ee9c14c51dbbc5d093ebfa3ae2be91b97c9913542549bd5ec4ed3084b7a" Nov 24 11:28:27 crc kubenswrapper[5072]: I1124 11:28:27.566382 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ghzbb" Nov 24 11:28:27 crc kubenswrapper[5072]: I1124 11:28:27.799463 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:28:27 crc kubenswrapper[5072]: I1124 11:28:27.800155 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7e1ea7ea-3fc5-4ae0-80c1-d769428711d2" containerName="nova-api-log" containerID="cri-o://7cdac74e617cd61ac7bdf1c71b05601211f9e58cb768e5d05b407be135413980" gracePeriod=30 Nov 24 11:28:27 crc kubenswrapper[5072]: I1124 11:28:27.800290 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7e1ea7ea-3fc5-4ae0-80c1-d769428711d2" containerName="nova-api-api" containerID="cri-o://c694f6acf6af52396dcde2b546f3f28759ac132a2761d7971341b73f0f435f17" gracePeriod=30 Nov 24 11:28:27 crc kubenswrapper[5072]: I1124 11:28:27.828969 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:28:27 crc kubenswrapper[5072]: I1124 11:28:27.829248 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="179bc010-e872-4be0-b453-088a8260caa5" containerName="nova-scheduler-scheduler" containerID="cri-o://c6e3298fd45803c6c49d67a1ab7743f89778f20e5d406858ad91f8a27395c48c" gracePeriod=30 Nov 24 11:28:27 crc kubenswrapper[5072]: I1124 11:28:27.852666 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:28:27 crc kubenswrapper[5072]: I1124 11:28:27.852922 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2" 
containerName="nova-metadata-log" containerID="cri-o://7ed93f6dfb00cf4d5234145c5d3271873d4c1eac308bc55c4d300f8b1e890d2a" gracePeriod=30 Nov 24 11:28:27 crc kubenswrapper[5072]: I1124 11:28:27.852962 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2" containerName="nova-metadata-metadata" containerID="cri-o://bf8f5fd1e53d40c0f76857d4a12e1ce7b670df788f3055203f8069d9cbb7ee24" gracePeriod=30 Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.575297 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"761b2964-cd70-47d9-ade7-8ddfb3eb73c3","Type":"ContainerStarted","Data":"64f401f26854854a6a44fed6bc7b451c23dc5e2140b0b0a71a493d5fe27c9b8a"} Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.575649 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.577391 5072 generic.go:334] "Generic (PLEG): container finished" podID="7e1ea7ea-3fc5-4ae0-80c1-d769428711d2" containerID="c694f6acf6af52396dcde2b546f3f28759ac132a2761d7971341b73f0f435f17" exitCode=0 Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.577415 5072 generic.go:334] "Generic (PLEG): container finished" podID="7e1ea7ea-3fc5-4ae0-80c1-d769428711d2" containerID="7cdac74e617cd61ac7bdf1c71b05601211f9e58cb768e5d05b407be135413980" exitCode=143 Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.577449 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2","Type":"ContainerDied","Data":"c694f6acf6af52396dcde2b546f3f28759ac132a2761d7971341b73f0f435f17"} Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.577466 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2","Type":"ContainerDied","Data":"7cdac74e617cd61ac7bdf1c71b05601211f9e58cb768e5d05b407be135413980"} Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.577476 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2","Type":"ContainerDied","Data":"976b3f052c4e6f0b3ed2366326e01bcb5d22cac2ad3fee3725bbb45af6f4f5cb"} Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.577485 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="976b3f052c4e6f0b3ed2366326e01bcb5d22cac2ad3fee3725bbb45af6f4f5cb" Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.579685 5072 generic.go:334] "Generic (PLEG): container finished" podID="2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2" containerID="7ed93f6dfb00cf4d5234145c5d3271873d4c1eac308bc55c4d300f8b1e890d2a" exitCode=143 Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.579734 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2","Type":"ContainerDied","Data":"7ed93f6dfb00cf4d5234145c5d3271873d4c1eac308bc55c4d300f8b1e890d2a"} Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.596268 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.466354494 podStartE2EDuration="6.596251118s" podCreationTimestamp="2025-11-24 11:28:22 +0000 UTC" firstStartedPulling="2025-11-24 11:28:23.452541811 +0000 UTC m=+1155.164066287" lastFinishedPulling="2025-11-24 11:28:27.582438435 +0000 UTC 
m=+1159.293962911" observedRunningTime="2025-11-24 11:28:28.594859923 +0000 UTC m=+1160.306384399" watchObservedRunningTime="2025-11-24 11:28:28.596251118 +0000 UTC m=+1160.307775584" Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.648057 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.802845 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2xsw\" (UniqueName: \"kubernetes.io/projected/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-kube-api-access-x2xsw\") pod \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\" (UID: \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\") " Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.802934 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-logs\") pod \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\" (UID: \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\") " Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.802957 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-internal-tls-certs\") pod \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\" (UID: \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\") " Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.802985 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-public-tls-certs\") pod \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\" (UID: \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\") " Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.803109 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-config-data\") pod \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\" (UID: \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\") " Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.803218 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-combined-ca-bundle\") pod \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\" (UID: \"7e1ea7ea-3fc5-4ae0-80c1-d769428711d2\") " Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.803325 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-logs" (OuterVolumeSpecName: "logs") pod "7e1ea7ea-3fc5-4ae0-80c1-d769428711d2" (UID: "7e1ea7ea-3fc5-4ae0-80c1-d769428711d2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.803716 5072 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.815809 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-kube-api-access-x2xsw" (OuterVolumeSpecName: "kube-api-access-x2xsw") pod "7e1ea7ea-3fc5-4ae0-80c1-d769428711d2" (UID: "7e1ea7ea-3fc5-4ae0-80c1-d769428711d2"). InnerVolumeSpecName "kube-api-access-x2xsw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.834165 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-config-data" (OuterVolumeSpecName: "config-data") pod "7e1ea7ea-3fc5-4ae0-80c1-d769428711d2" (UID: "7e1ea7ea-3fc5-4ae0-80c1-d769428711d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.858579 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e1ea7ea-3fc5-4ae0-80c1-d769428711d2" (UID: "7e1ea7ea-3fc5-4ae0-80c1-d769428711d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.870405 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7e1ea7ea-3fc5-4ae0-80c1-d769428711d2" (UID: "7e1ea7ea-3fc5-4ae0-80c1-d769428711d2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.875853 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7e1ea7ea-3fc5-4ae0-80c1-d769428711d2" (UID: "7e1ea7ea-3fc5-4ae0-80c1-d769428711d2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.907674 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.907717 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2xsw\" (UniqueName: \"kubernetes.io/projected/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-kube-api-access-x2xsw\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.907728 5072 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.907738 5072 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:28 crc kubenswrapper[5072]: I1124 11:28:28.907746 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.587352 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.617463 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.636611 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.645504 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 11:28:29 crc kubenswrapper[5072]: E1124 11:28:29.645924 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d90486-6954-484a-aa10-2ffa6789cdc7" containerName="nova-manage" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.645942 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d90486-6954-484a-aa10-2ffa6789cdc7" containerName="nova-manage" Nov 24 11:28:29 crc kubenswrapper[5072]: E1124 11:28:29.645963 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e1ea7ea-3fc5-4ae0-80c1-d769428711d2" containerName="nova-api-api" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.645970 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e1ea7ea-3fc5-4ae0-80c1-d769428711d2" containerName="nova-api-api" Nov 24 11:28:29 crc kubenswrapper[5072]: E1124 11:28:29.645991 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e1ea7ea-3fc5-4ae0-80c1-d769428711d2" containerName="nova-api-log" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.645997 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e1ea7ea-3fc5-4ae0-80c1-d769428711d2" containerName="nova-api-log" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.646140 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e1ea7ea-3fc5-4ae0-80c1-d769428711d2" containerName="nova-api-log" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.646157 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4d90486-6954-484a-aa10-2ffa6789cdc7" containerName="nova-manage" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.646172 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e1ea7ea-3fc5-4ae0-80c1-d769428711d2" containerName="nova-api-api" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.647016 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.653101 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.653270 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.653318 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.667098 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.727089 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw8px\" (UniqueName: \"kubernetes.io/projected/82f52ff9-d0f6-4a88-bc4e-47d4d47808ac-kube-api-access-cw8px\") pod \"nova-api-0\" (UID: \"82f52ff9-d0f6-4a88-bc4e-47d4d47808ac\") " pod="openstack/nova-api-0" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.727182 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82f52ff9-d0f6-4a88-bc4e-47d4d47808ac-internal-tls-certs\") pod \"nova-api-0\" (UID: \"82f52ff9-d0f6-4a88-bc4e-47d4d47808ac\") " pod="openstack/nova-api-0" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.727200 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82f52ff9-d0f6-4a88-bc4e-47d4d47808ac-public-tls-certs\") pod \"nova-api-0\" (UID: \"82f52ff9-d0f6-4a88-bc4e-47d4d47808ac\") " pod="openstack/nova-api-0" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.727257 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82f52ff9-d0f6-4a88-bc4e-47d4d47808ac-config-data\") pod \"nova-api-0\" (UID: \"82f52ff9-d0f6-4a88-bc4e-47d4d47808ac\") " pod="openstack/nova-api-0" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.727273 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82f52ff9-d0f6-4a88-bc4e-47d4d47808ac-logs\") pod \"nova-api-0\" (UID: \"82f52ff9-d0f6-4a88-bc4e-47d4d47808ac\") " pod="openstack/nova-api-0" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.727753 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82f52ff9-d0f6-4a88-bc4e-47d4d47808ac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"82f52ff9-d0f6-4a88-bc4e-47d4d47808ac\") " pod="openstack/nova-api-0" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.829968 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw8px\" (UniqueName: \"kubernetes.io/projected/82f52ff9-d0f6-4a88-bc4e-47d4d47808ac-kube-api-access-cw8px\") pod \"nova-api-0\" (UID: \"82f52ff9-d0f6-4a88-bc4e-47d4d47808ac\") " pod="openstack/nova-api-0" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.835217 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82f52ff9-d0f6-4a88-bc4e-47d4d47808ac-internal-tls-certs\") 
pod \"nova-api-0\" (UID: \"82f52ff9-d0f6-4a88-bc4e-47d4d47808ac\") " pod="openstack/nova-api-0" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.835288 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82f52ff9-d0f6-4a88-bc4e-47d4d47808ac-public-tls-certs\") pod \"nova-api-0\" (UID: \"82f52ff9-d0f6-4a88-bc4e-47d4d47808ac\") " pod="openstack/nova-api-0" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.835335 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82f52ff9-d0f6-4a88-bc4e-47d4d47808ac-config-data\") pod \"nova-api-0\" (UID: \"82f52ff9-d0f6-4a88-bc4e-47d4d47808ac\") " pod="openstack/nova-api-0" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.835361 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82f52ff9-d0f6-4a88-bc4e-47d4d47808ac-logs\") pod \"nova-api-0\" (UID: \"82f52ff9-d0f6-4a88-bc4e-47d4d47808ac\") " pod="openstack/nova-api-0" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.835493 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82f52ff9-d0f6-4a88-bc4e-47d4d47808ac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"82f52ff9-d0f6-4a88-bc4e-47d4d47808ac\") " pod="openstack/nova-api-0" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.836452 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82f52ff9-d0f6-4a88-bc4e-47d4d47808ac-logs\") pod \"nova-api-0\" (UID: \"82f52ff9-d0f6-4a88-bc4e-47d4d47808ac\") " pod="openstack/nova-api-0" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.839677 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82f52ff9-d0f6-4a88-bc4e-47d4d47808ac-internal-tls-certs\") pod \"nova-api-0\" (UID: \"82f52ff9-d0f6-4a88-bc4e-47d4d47808ac\") " pod="openstack/nova-api-0" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.844058 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82f52ff9-d0f6-4a88-bc4e-47d4d47808ac-public-tls-certs\") pod \"nova-api-0\" (UID: \"82f52ff9-d0f6-4a88-bc4e-47d4d47808ac\") " pod="openstack/nova-api-0" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.848100 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82f52ff9-d0f6-4a88-bc4e-47d4d47808ac-config-data\") pod \"nova-api-0\" (UID: \"82f52ff9-d0f6-4a88-bc4e-47d4d47808ac\") " pod="openstack/nova-api-0" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.848790 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82f52ff9-d0f6-4a88-bc4e-47d4d47808ac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"82f52ff9-d0f6-4a88-bc4e-47d4d47808ac\") " pod="openstack/nova-api-0" Nov 24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.849667 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw8px\" (UniqueName: \"kubernetes.io/projected/82f52ff9-d0f6-4a88-bc4e-47d4d47808ac-kube-api-access-cw8px\") pod \"nova-api-0\" (UID: \"82f52ff9-d0f6-4a88-bc4e-47d4d47808ac\") " pod="openstack/nova-api-0" Nov 
24 11:28:29 crc kubenswrapper[5072]: I1124 11:28:29.965474 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:28:30 crc kubenswrapper[5072]: I1124 11:28:30.452216 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:28:30 crc kubenswrapper[5072]: I1124 11:28:30.609561 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"82f52ff9-d0f6-4a88-bc4e-47d4d47808ac","Type":"ContainerStarted","Data":"dd496f59d4de21971e6ac781042d41ae548e9294b5b71ce04a51ed4ce72206a3"} Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.001433 5072 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.175:8775/\": read tcp 10.217.0.2:41174->10.217.0.175:8775: read: connection reset by peer" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.001662 5072 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.175:8775/\": read tcp 10.217.0.2:41188->10.217.0.175:8775: read: connection reset by peer" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.031544 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e1ea7ea-3fc5-4ae0-80c1-d769428711d2" path="/var/lib/kubelet/pods/7e1ea7ea-3fc5-4ae0-80c1-d769428711d2/volumes" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.369085 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.465315 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpwws\" (UniqueName: \"kubernetes.io/projected/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-kube-api-access-jpwws\") pod \"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2\" (UID: \"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2\") " Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.465519 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-logs\") pod \"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2\" (UID: \"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2\") " Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.465601 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-nova-metadata-tls-certs\") pod \"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2\" (UID: \"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2\") " Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.465695 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-combined-ca-bundle\") pod \"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2\" (UID: \"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2\") " Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.465763 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-config-data\") pod \"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2\" (UID: 
\"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2\") " Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.472981 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-logs" (OuterVolumeSpecName: "logs") pod "2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2" (UID: "2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.480722 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-kube-api-access-jpwws" (OuterVolumeSpecName: "kube-api-access-jpwws") pod "2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2" (UID: "2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2"). InnerVolumeSpecName "kube-api-access-jpwws". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.506945 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2" (UID: "2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.509856 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-config-data" (OuterVolumeSpecName: "config-data") pod "2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2" (UID: "2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.528330 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2" (UID: "2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.573076 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.574362 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.574669 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpwws\" (UniqueName: \"kubernetes.io/projected/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-kube-api-access-jpwws\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.575015 5072 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.575046 5072 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.635076 5072 generic.go:334] "Generic (PLEG): container finished" podID="2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2" containerID="bf8f5fd1e53d40c0f76857d4a12e1ce7b670df788f3055203f8069d9cbb7ee24" exitCode=0 Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.635145 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2","Type":"ContainerDied","Data":"bf8f5fd1e53d40c0f76857d4a12e1ce7b670df788f3055203f8069d9cbb7ee24"} Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.635168 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2","Type":"ContainerDied","Data":"14b022c2f3bfb8a8194c032c26d63079f77f6358a0d2e077b5d2c41cc672c28a"} Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.635183 5072 scope.go:117] "RemoveContainer" containerID="bf8f5fd1e53d40c0f76857d4a12e1ce7b670df788f3055203f8069d9cbb7ee24" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.635285 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.642317 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"82f52ff9-d0f6-4a88-bc4e-47d4d47808ac","Type":"ContainerStarted","Data":"0fdf499ed4f368ae6a5834fb31f511d1fa57a0ce2af86d25ea67eb9766307e10"} Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.643930 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"82f52ff9-d0f6-4a88-bc4e-47d4d47808ac","Type":"ContainerStarted","Data":"149c3122e19353627062eb45119fa898329d6eaf70f5d4b76e257a12f4967473"} Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.668474 5072 scope.go:117] "RemoveContainer" containerID="7ed93f6dfb00cf4d5234145c5d3271873d4c1eac308bc55c4d300f8b1e890d2a" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.678217 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.678201155 podStartE2EDuration="2.678201155s" podCreationTimestamp="2025-11-24 11:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:28:31.67277504 +0000 UTC m=+1163.384299546" watchObservedRunningTime="2025-11-24 11:28:31.678201155 +0000 UTC m=+1163.389725631" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.700625 5072 scope.go:117] "RemoveContainer" containerID="bf8f5fd1e53d40c0f76857d4a12e1ce7b670df788f3055203f8069d9cbb7ee24" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.701686 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:28:31 crc kubenswrapper[5072]: E1124 11:28:31.702810 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf8f5fd1e53d40c0f76857d4a12e1ce7b670df788f3055203f8069d9cbb7ee24\": container with ID starting with bf8f5fd1e53d40c0f76857d4a12e1ce7b670df788f3055203f8069d9cbb7ee24 not found: ID does not exist" containerID="bf8f5fd1e53d40c0f76857d4a12e1ce7b670df788f3055203f8069d9cbb7ee24" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.702851 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf8f5fd1e53d40c0f76857d4a12e1ce7b670df788f3055203f8069d9cbb7ee24"} err="failed to get container status \"bf8f5fd1e53d40c0f76857d4a12e1ce7b670df788f3055203f8069d9cbb7ee24\": rpc error: code = NotFound desc = could not find container \"bf8f5fd1e53d40c0f76857d4a12e1ce7b670df788f3055203f8069d9cbb7ee24\": container with ID starting with bf8f5fd1e53d40c0f76857d4a12e1ce7b670df788f3055203f8069d9cbb7ee24 not found: ID does not exist" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.702880 5072 scope.go:117] "RemoveContainer" containerID="7ed93f6dfb00cf4d5234145c5d3271873d4c1eac308bc55c4d300f8b1e890d2a" Nov 24 11:28:31 crc kubenswrapper[5072]: E1124 11:28:31.703260 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ed93f6dfb00cf4d5234145c5d3271873d4c1eac308bc55c4d300f8b1e890d2a\": container with ID starting with 7ed93f6dfb00cf4d5234145c5d3271873d4c1eac308bc55c4d300f8b1e890d2a not found: ID does not exist" containerID="7ed93f6dfb00cf4d5234145c5d3271873d4c1eac308bc55c4d300f8b1e890d2a" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.703298 5072 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7ed93f6dfb00cf4d5234145c5d3271873d4c1eac308bc55c4d300f8b1e890d2a"} err="failed to get container status \"7ed93f6dfb00cf4d5234145c5d3271873d4c1eac308bc55c4d300f8b1e890d2a\": rpc error: code = NotFound desc = could not find container \"7ed93f6dfb00cf4d5234145c5d3271873d4c1eac308bc55c4d300f8b1e890d2a\": container with ID starting with 7ed93f6dfb00cf4d5234145c5d3271873d4c1eac308bc55c4d300f8b1e890d2a not found: ID does not exist" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.709660 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.728480 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:28:31 crc kubenswrapper[5072]: E1124 11:28:31.729176 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2" containerName="nova-metadata-metadata" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.729274 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2" containerName="nova-metadata-metadata" Nov 24 11:28:31 crc kubenswrapper[5072]: E1124 11:28:31.729399 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2" containerName="nova-metadata-log" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.729480 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2" containerName="nova-metadata-log" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.729775 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2" containerName="nova-metadata-log" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.729871 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2" containerName="nova-metadata-metadata" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.731148 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.733696 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.733964 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.739443 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.778360 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vbrj\" (UniqueName: \"kubernetes.io/projected/cb7d5b02-88e5-4f50-8039-3d573e832977-kube-api-access-6vbrj\") pod \"nova-metadata-0\" (UID: \"cb7d5b02-88e5-4f50-8039-3d573e832977\") " pod="openstack/nova-metadata-0" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.778442 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb7d5b02-88e5-4f50-8039-3d573e832977-config-data\") pod \"nova-metadata-0\" (UID: \"cb7d5b02-88e5-4f50-8039-3d573e832977\") " pod="openstack/nova-metadata-0" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.778468 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb7d5b02-88e5-4f50-8039-3d573e832977-logs\") pod \"nova-metadata-0\" (UID: \"cb7d5b02-88e5-4f50-8039-3d573e832977\") " pod="openstack/nova-metadata-0" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.778488 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb7d5b02-88e5-4f50-8039-3d573e832977-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cb7d5b02-88e5-4f50-8039-3d573e832977\") " pod="openstack/nova-metadata-0" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.778520 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb7d5b02-88e5-4f50-8039-3d573e832977-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cb7d5b02-88e5-4f50-8039-3d573e832977\") " pod="openstack/nova-metadata-0" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.879589 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb7d5b02-88e5-4f50-8039-3d573e832977-config-data\") pod \"nova-metadata-0\" (UID: \"cb7d5b02-88e5-4f50-8039-3d573e832977\") " pod="openstack/nova-metadata-0" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.879645 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb7d5b02-88e5-4f50-8039-3d573e832977-logs\") pod \"nova-metadata-0\" (UID: \"cb7d5b02-88e5-4f50-8039-3d573e832977\") " pod="openstack/nova-metadata-0" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.879675 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb7d5b02-88e5-4f50-8039-3d573e832977-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cb7d5b02-88e5-4f50-8039-3d573e832977\") " pod="openstack/nova-metadata-0" Nov 24 11:28:31 crc 
kubenswrapper[5072]: I1124 11:28:31.880163 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb7d5b02-88e5-4f50-8039-3d573e832977-logs\") pod \"nova-metadata-0\" (UID: \"cb7d5b02-88e5-4f50-8039-3d573e832977\") " pod="openstack/nova-metadata-0" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.880264 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb7d5b02-88e5-4f50-8039-3d573e832977-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cb7d5b02-88e5-4f50-8039-3d573e832977\") " pod="openstack/nova-metadata-0" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.880936 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vbrj\" (UniqueName: \"kubernetes.io/projected/cb7d5b02-88e5-4f50-8039-3d573e832977-kube-api-access-6vbrj\") pod \"nova-metadata-0\" (UID: \"cb7d5b02-88e5-4f50-8039-3d573e832977\") " pod="openstack/nova-metadata-0" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.883679 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb7d5b02-88e5-4f50-8039-3d573e832977-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cb7d5b02-88e5-4f50-8039-3d573e832977\") " pod="openstack/nova-metadata-0" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.884490 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb7d5b02-88e5-4f50-8039-3d573e832977-config-data\") pod \"nova-metadata-0\" (UID: \"cb7d5b02-88e5-4f50-8039-3d573e832977\") " pod="openstack/nova-metadata-0" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.895206 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb7d5b02-88e5-4f50-8039-3d573e832977-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cb7d5b02-88e5-4f50-8039-3d573e832977\") " pod="openstack/nova-metadata-0" Nov 24 11:28:31 crc kubenswrapper[5072]: I1124 11:28:31.895755 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vbrj\" (UniqueName: \"kubernetes.io/projected/cb7d5b02-88e5-4f50-8039-3d573e832977-kube-api-access-6vbrj\") pod \"nova-metadata-0\" (UID: \"cb7d5b02-88e5-4f50-8039-3d573e832977\") " pod="openstack/nova-metadata-0" Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.050925 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.372009 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.493042 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgbxf\" (UniqueName: \"kubernetes.io/projected/179bc010-e872-4be0-b453-088a8260caa5-kube-api-access-xgbxf\") pod \"179bc010-e872-4be0-b453-088a8260caa5\" (UID: \"179bc010-e872-4be0-b453-088a8260caa5\") " Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.493396 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/179bc010-e872-4be0-b453-088a8260caa5-config-data\") pod \"179bc010-e872-4be0-b453-088a8260caa5\" (UID: \"179bc010-e872-4be0-b453-088a8260caa5\") " Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.493417 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/179bc010-e872-4be0-b453-088a8260caa5-combined-ca-bundle\") pod \"179bc010-e872-4be0-b453-088a8260caa5\" (UID: \"179bc010-e872-4be0-b453-088a8260caa5\") " Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.499598 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/179bc010-e872-4be0-b453-088a8260caa5-kube-api-access-xgbxf" (OuterVolumeSpecName: "kube-api-access-xgbxf") pod "179bc010-e872-4be0-b453-088a8260caa5" (UID: "179bc010-e872-4be0-b453-088a8260caa5"). InnerVolumeSpecName "kube-api-access-xgbxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.517592 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/179bc010-e872-4be0-b453-088a8260caa5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "179bc010-e872-4be0-b453-088a8260caa5" (UID: "179bc010-e872-4be0-b453-088a8260caa5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.523191 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/179bc010-e872-4be0-b453-088a8260caa5-config-data" (OuterVolumeSpecName: "config-data") pod "179bc010-e872-4be0-b453-088a8260caa5" (UID: "179bc010-e872-4be0-b453-088a8260caa5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.573897 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:28:32 crc kubenswrapper[5072]: W1124 11:28:32.576864 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb7d5b02_88e5_4f50_8039_3d573e832977.slice/crio-49f2a7aee380922cb38cdecefd45f3d4c39f12480cb9239aaca65584817dabcb WatchSource:0}: Error finding container 49f2a7aee380922cb38cdecefd45f3d4c39f12480cb9239aaca65584817dabcb: Status 404 returned error can't find the container with id 49f2a7aee380922cb38cdecefd45f3d4c39f12480cb9239aaca65584817dabcb Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.596101 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/179bc010-e872-4be0-b453-088a8260caa5-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.596133 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/179bc010-e872-4be0-b453-088a8260caa5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.596147 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgbxf\" (UniqueName: \"kubernetes.io/projected/179bc010-e872-4be0-b453-088a8260caa5-kube-api-access-xgbxf\") on node \"crc\" DevicePath \"\"" Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.652876 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cb7d5b02-88e5-4f50-8039-3d573e832977","Type":"ContainerStarted","Data":"49f2a7aee380922cb38cdecefd45f3d4c39f12480cb9239aaca65584817dabcb"} Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.654688 5072 generic.go:334] "Generic (PLEG): container finished" podID="179bc010-e872-4be0-b453-088a8260caa5" containerID="c6e3298fd45803c6c49d67a1ab7743f89778f20e5d406858ad91f8a27395c48c" exitCode=0 Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.654738 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"179bc010-e872-4be0-b453-088a8260caa5","Type":"ContainerDied","Data":"c6e3298fd45803c6c49d67a1ab7743f89778f20e5d406858ad91f8a27395c48c"} Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.654758 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"179bc010-e872-4be0-b453-088a8260caa5","Type":"ContainerDied","Data":"84888f0d4206652a8cc907ccdd9fbea76ae5ea53a805627bb285487adc7f6f4e"} Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.654778 5072 scope.go:117] "RemoveContainer" containerID="c6e3298fd45803c6c49d67a1ab7743f89778f20e5d406858ad91f8a27395c48c" Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.654860 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.697499 5072 scope.go:117] "RemoveContainer" containerID="c6e3298fd45803c6c49d67a1ab7743f89778f20e5d406858ad91f8a27395c48c" Nov 24 11:28:32 crc kubenswrapper[5072]: E1124 11:28:32.698816 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6e3298fd45803c6c49d67a1ab7743f89778f20e5d406858ad91f8a27395c48c\": container with ID starting with c6e3298fd45803c6c49d67a1ab7743f89778f20e5d406858ad91f8a27395c48c not found: ID does not exist" containerID="c6e3298fd45803c6c49d67a1ab7743f89778f20e5d406858ad91f8a27395c48c" Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.698870 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6e3298fd45803c6c49d67a1ab7743f89778f20e5d406858ad91f8a27395c48c"} err="failed to get container status \"c6e3298fd45803c6c49d67a1ab7743f89778f20e5d406858ad91f8a27395c48c\": rpc error: code = NotFound desc = could not find container \"c6e3298fd45803c6c49d67a1ab7743f89778f20e5d406858ad91f8a27395c48c\": container with ID starting with c6e3298fd45803c6c49d67a1ab7743f89778f20e5d406858ad91f8a27395c48c not found: ID does not exist" Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.707319 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.720988 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.727767 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:28:32 crc kubenswrapper[5072]: E1124 11:28:32.728094 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="179bc010-e872-4be0-b453-088a8260caa5" containerName="nova-scheduler-scheduler" Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.728112 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="179bc010-e872-4be0-b453-088a8260caa5" containerName="nova-scheduler-scheduler" Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.728298 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="179bc010-e872-4be0-b453-088a8260caa5" containerName="nova-scheduler-scheduler" Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.729601 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.732450 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.771612 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.801093 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5rj8\" (UniqueName: \"kubernetes.io/projected/c842f0bb-64ee-4e70-a276-cf281480cf05-kube-api-access-b5rj8\") pod \"nova-scheduler-0\" (UID: \"c842f0bb-64ee-4e70-a276-cf281480cf05\") " pod="openstack/nova-scheduler-0" Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.801186 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c842f0bb-64ee-4e70-a276-cf281480cf05-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c842f0bb-64ee-4e70-a276-cf281480cf05\") " pod="openstack/nova-scheduler-0" Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.801291 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c842f0bb-64ee-4e70-a276-cf281480cf05-config-data\") pod \"nova-scheduler-0\" (UID: \"c842f0bb-64ee-4e70-a276-cf281480cf05\") " pod="openstack/nova-scheduler-0" Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.902829 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c842f0bb-64ee-4e70-a276-cf281480cf05-config-data\") pod \"nova-scheduler-0\" (UID: \"c842f0bb-64ee-4e70-a276-cf281480cf05\") " pod="openstack/nova-scheduler-0" Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.903151 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5rj8\" (UniqueName: \"kubernetes.io/projected/c842f0bb-64ee-4e70-a276-cf281480cf05-kube-api-access-b5rj8\") pod \"nova-scheduler-0\" (UID: \"c842f0bb-64ee-4e70-a276-cf281480cf05\") " pod="openstack/nova-scheduler-0" Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.903236 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c842f0bb-64ee-4e70-a276-cf281480cf05-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c842f0bb-64ee-4e70-a276-cf281480cf05\") " pod="openstack/nova-scheduler-0" Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.906091 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c842f0bb-64ee-4e70-a276-cf281480cf05-config-data\") pod \"nova-scheduler-0\" (UID: \"c842f0bb-64ee-4e70-a276-cf281480cf05\") " pod="openstack/nova-scheduler-0" Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.907101 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c842f0bb-64ee-4e70-a276-cf281480cf05-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c842f0bb-64ee-4e70-a276-cf281480cf05\") " pod="openstack/nova-scheduler-0" Nov 24 11:28:32 crc kubenswrapper[5072]: I1124 11:28:32.924291 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5rj8\" (UniqueName: 
\"kubernetes.io/projected/c842f0bb-64ee-4e70-a276-cf281480cf05-kube-api-access-b5rj8\") pod \"nova-scheduler-0\" (UID: \"c842f0bb-64ee-4e70-a276-cf281480cf05\") " pod="openstack/nova-scheduler-0" Nov 24 11:28:33 crc kubenswrapper[5072]: I1124 11:28:33.029281 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="179bc010-e872-4be0-b453-088a8260caa5" path="/var/lib/kubelet/pods/179bc010-e872-4be0-b453-088a8260caa5/volumes" Nov 24 11:28:33 crc kubenswrapper[5072]: I1124 11:28:33.031075 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2" path="/var/lib/kubelet/pods/2ab8c206-f9b3-4aa1-96c7-3a19f7a9b1b2/volumes" Nov 24 11:28:33 crc kubenswrapper[5072]: I1124 11:28:33.065616 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 11:28:33 crc kubenswrapper[5072]: W1124 11:28:33.533484 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc842f0bb_64ee_4e70_a276_cf281480cf05.slice/crio-0c4cc9f862f47758543af489575b962005df4a7f00e583fb7ce313a6e8a2e0b0 WatchSource:0}: Error finding container 0c4cc9f862f47758543af489575b962005df4a7f00e583fb7ce313a6e8a2e0b0: Status 404 returned error can't find the container with id 0c4cc9f862f47758543af489575b962005df4a7f00e583fb7ce313a6e8a2e0b0 Nov 24 11:28:33 crc kubenswrapper[5072]: I1124 11:28:33.534240 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:28:33 crc kubenswrapper[5072]: I1124 11:28:33.684775 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c842f0bb-64ee-4e70-a276-cf281480cf05","Type":"ContainerStarted","Data":"0c4cc9f862f47758543af489575b962005df4a7f00e583fb7ce313a6e8a2e0b0"} Nov 24 11:28:33 crc kubenswrapper[5072]: I1124 11:28:33.689059 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cb7d5b02-88e5-4f50-8039-3d573e832977","Type":"ContainerStarted","Data":"a311a959f7769049fcf38af35888dd7e2854e10506a1c6c17ce2be77ce71eb55"} Nov 24 11:28:33 crc kubenswrapper[5072]: I1124 11:28:33.689107 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cb7d5b02-88e5-4f50-8039-3d573e832977","Type":"ContainerStarted","Data":"378a80f61e696ee32c103e34b5e501892667234d08a0bbff8834b0435534eb37"} Nov 24 11:28:33 crc kubenswrapper[5072]: I1124 11:28:33.714435 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.714414915 podStartE2EDuration="2.714414915s" podCreationTimestamp="2025-11-24 11:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:28:33.709282797 +0000 UTC m=+1165.420807313" watchObservedRunningTime="2025-11-24 11:28:33.714414915 +0000 UTC m=+1165.425939391" Nov 24 11:28:34 crc kubenswrapper[5072]: I1124 11:28:34.705171 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c842f0bb-64ee-4e70-a276-cf281480cf05","Type":"ContainerStarted","Data":"f2273ddbc062dbe262824d8381a827d1eb3bbcecead9b5ce2ab23951273481e4"} Nov 24 11:28:34 crc kubenswrapper[5072]: I1124 11:28:34.746899 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.746873951 
podStartE2EDuration="2.746873951s" podCreationTimestamp="2025-11-24 11:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:28:34.733036237 +0000 UTC m=+1166.444560753" watchObservedRunningTime="2025-11-24 11:28:34.746873951 +0000 UTC m=+1166.458398467" Nov 24 11:28:37 crc kubenswrapper[5072]: I1124 11:28:37.052536 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 11:28:37 crc kubenswrapper[5072]: I1124 11:28:37.052887 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 11:28:38 crc kubenswrapper[5072]: I1124 11:28:38.066119 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 11:28:39 crc kubenswrapper[5072]: I1124 11:28:39.966064 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 11:28:39 crc kubenswrapper[5072]: I1124 11:28:39.966523 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 11:28:40 crc kubenswrapper[5072]: I1124 11:28:40.980525 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="82f52ff9-d0f6-4a88-bc4e-47d4d47808ac" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.186:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 11:28:40 crc kubenswrapper[5072]: I1124 11:28:40.980536 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="82f52ff9-d0f6-4a88-bc4e-47d4d47808ac" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.186:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 11:28:42 crc kubenswrapper[5072]: I1124 11:28:42.052442 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 11:28:42 crc kubenswrapper[5072]: I1124 11:28:42.052513 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 11:28:43 crc kubenswrapper[5072]: I1124 11:28:43.061587 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cb7d5b02-88e5-4f50-8039-3d573e832977" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.187:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 11:28:43 crc kubenswrapper[5072]: I1124 11:28:43.066409 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 11:28:43 crc kubenswrapper[5072]: I1124 11:28:43.070573 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cb7d5b02-88e5-4f50-8039-3d573e832977" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.187:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 11:28:43 crc kubenswrapper[5072]: I1124 11:28:43.098705 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 11:28:43 crc kubenswrapper[5072]: I1124 11:28:43.853226 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 11:28:49 crc kubenswrapper[5072]: 
I1124 11:28:49.973759 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 11:28:49 crc kubenswrapper[5072]: I1124 11:28:49.974626 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 11:28:49 crc kubenswrapper[5072]: I1124 11:28:49.977182 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 11:28:49 crc kubenswrapper[5072]: I1124 11:28:49.980684 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 11:28:50 crc kubenswrapper[5072]: I1124 11:28:50.870101 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 11:28:50 crc kubenswrapper[5072]: I1124 11:28:50.878906 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 11:28:52 crc kubenswrapper[5072]: I1124 11:28:52.057921 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 11:28:52 crc kubenswrapper[5072]: I1124 11:28:52.060403 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 11:28:52 crc kubenswrapper[5072]: I1124 11:28:52.064271 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 11:28:52 crc kubenswrapper[5072]: I1124 11:28:52.891360 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 11:28:52 crc kubenswrapper[5072]: I1124 11:28:52.963788 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 11:29:02 crc kubenswrapper[5072]: I1124 11:29:02.135845 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 11:29:03 crc kubenswrapper[5072]: I1124 11:29:03.542805 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 11:29:06 crc kubenswrapper[5072]: I1124 11:29:06.367926 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="354afe75-70d3-4c45-a990-0299f821b0af" containerName="rabbitmq" containerID="cri-o://5289899340a01a653ec7ac1b228e516c26a5e7582db802a8b49f051bfabe2c2f" gracePeriod=604796 Nov 24 11:29:08 crc kubenswrapper[5072]: I1124 11:29:08.069413 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="224cff60-3d72-478d-9788-926bbca42ad2" containerName="rabbitmq" containerID="cri-o://7632bd7692c742dde61619c49b4b4c3df75f9dab1b21043cfeb0c078e48057b5" gracePeriod=604796 Nov 24 11:29:08 crc kubenswrapper[5072]: I1124 11:29:08.751911 5072 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="354afe75-70d3-4c45-a990-0299f821b0af" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Nov 24 11:29:09 crc kubenswrapper[5072]: I1124 11:29:09.055227 5072 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="224cff60-3d72-478d-9788-926bbca42ad2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.032052 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.120618 5072 generic.go:334] "Generic (PLEG): container finished" podID="354afe75-70d3-4c45-a990-0299f821b0af" containerID="5289899340a01a653ec7ac1b228e516c26a5e7582db802a8b49f051bfabe2c2f" exitCode=0 Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.120687 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"354afe75-70d3-4c45-a990-0299f821b0af","Type":"ContainerDied","Data":"5289899340a01a653ec7ac1b228e516c26a5e7582db802a8b49f051bfabe2c2f"} Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.120728 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"354afe75-70d3-4c45-a990-0299f821b0af","Type":"ContainerDied","Data":"5d84a0f6dcbc41495cb0e6095d4bf49c2d0904b4b71e374fbc7755861fc0bf62"} Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.120753 5072 scope.go:117] "RemoveContainer" containerID="5289899340a01a653ec7ac1b228e516c26a5e7582db802a8b49f051bfabe2c2f" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.120939 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.128071 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/354afe75-70d3-4c45-a990-0299f821b0af-server-conf\") pod \"354afe75-70d3-4c45-a990-0299f821b0af\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.128112 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/354afe75-70d3-4c45-a990-0299f821b0af-pod-info\") pod \"354afe75-70d3-4c45-a990-0299f821b0af\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.128158 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/354afe75-70d3-4c45-a990-0299f821b0af-plugins-conf\") pod \"354afe75-70d3-4c45-a990-0299f821b0af\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.128275 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/354afe75-70d3-4c45-a990-0299f821b0af-config-data\") pod \"354afe75-70d3-4c45-a990-0299f821b0af\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.128296 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/354afe75-70d3-4c45-a990-0299f821b0af-rabbitmq-tls\") pod \"354afe75-70d3-4c45-a990-0299f821b0af\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.128532 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/354afe75-70d3-4c45-a990-0299f821b0af-rabbitmq-erlang-cookie\") pod \"354afe75-70d3-4c45-a990-0299f821b0af\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.128578 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-lwffz\" (UniqueName: \"kubernetes.io/projected/354afe75-70d3-4c45-a990-0299f821b0af-kube-api-access-lwffz\") pod \"354afe75-70d3-4c45-a990-0299f821b0af\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.128600 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/354afe75-70d3-4c45-a990-0299f821b0af-erlang-cookie-secret\") pod \"354afe75-70d3-4c45-a990-0299f821b0af\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.128678 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"354afe75-70d3-4c45-a990-0299f821b0af\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.128755 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/354afe75-70d3-4c45-a990-0299f821b0af-rabbitmq-confd\") pod \"354afe75-70d3-4c45-a990-0299f821b0af\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.128827 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/354afe75-70d3-4c45-a990-0299f821b0af-rabbitmq-plugins\") pod \"354afe75-70d3-4c45-a990-0299f821b0af\" (UID: \"354afe75-70d3-4c45-a990-0299f821b0af\") " Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.139017 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/354afe75-70d3-4c45-a990-0299f821b0af-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "354afe75-70d3-4c45-a990-0299f821b0af" (UID: "354afe75-70d3-4c45-a990-0299f821b0af"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.139855 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/354afe75-70d3-4c45-a990-0299f821b0af-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "354afe75-70d3-4c45-a990-0299f821b0af" (UID: "354afe75-70d3-4c45-a990-0299f821b0af"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.157043 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/354afe75-70d3-4c45-a990-0299f821b0af-pod-info" (OuterVolumeSpecName: "pod-info") pod "354afe75-70d3-4c45-a990-0299f821b0af" (UID: "354afe75-70d3-4c45-a990-0299f821b0af"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.157521 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/354afe75-70d3-4c45-a990-0299f821b0af-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "354afe75-70d3-4c45-a990-0299f821b0af" (UID: "354afe75-70d3-4c45-a990-0299f821b0af"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.161095 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "354afe75-70d3-4c45-a990-0299f821b0af" (UID: "354afe75-70d3-4c45-a990-0299f821b0af"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.161672 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/354afe75-70d3-4c45-a990-0299f821b0af-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "354afe75-70d3-4c45-a990-0299f821b0af" (UID: "354afe75-70d3-4c45-a990-0299f821b0af"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.185624 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/354afe75-70d3-4c45-a990-0299f821b0af-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "354afe75-70d3-4c45-a990-0299f821b0af" (UID: "354afe75-70d3-4c45-a990-0299f821b0af"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.230807 5072 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/354afe75-70d3-4c45-a990-0299f821b0af-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.230833 5072 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/354afe75-70d3-4c45-a990-0299f821b0af-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.230842 5072 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/354afe75-70d3-4c45-a990-0299f821b0af-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.230851 5072 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/354afe75-70d3-4c45-a990-0299f821b0af-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.230859 5072 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/354afe75-70d3-4c45-a990-0299f821b0af-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.230879 5072 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.230888 5072 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/354afe75-70d3-4c45-a990-0299f821b0af-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.250769 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/354afe75-70d3-4c45-a990-0299f821b0af-kube-api-access-lwffz" (OuterVolumeSpecName: "kube-api-access-lwffz") pod "354afe75-70d3-4c45-a990-0299f821b0af" 
(UID: "354afe75-70d3-4c45-a990-0299f821b0af"). InnerVolumeSpecName "kube-api-access-lwffz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.251613 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/354afe75-70d3-4c45-a990-0299f821b0af-config-data" (OuterVolumeSpecName: "config-data") pod "354afe75-70d3-4c45-a990-0299f821b0af" (UID: "354afe75-70d3-4c45-a990-0299f821b0af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.287445 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/354afe75-70d3-4c45-a990-0299f821b0af-server-conf" (OuterVolumeSpecName: "server-conf") pod "354afe75-70d3-4c45-a990-0299f821b0af" (UID: "354afe75-70d3-4c45-a990-0299f821b0af"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.316975 5072 scope.go:117] "RemoveContainer" containerID="50ed5bcf7b58686c9c39d2083331f2f908ec020f73f7ca7435cdf2c9fd7abe38" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.319891 5072 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.335287 5072 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/354afe75-70d3-4c45-a990-0299f821b0af-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.335338 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/354afe75-70d3-4c45-a990-0299f821b0af-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.335351 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwffz\" (UniqueName: \"kubernetes.io/projected/354afe75-70d3-4c45-a990-0299f821b0af-kube-api-access-lwffz\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.335364 5072 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.371430 5072 scope.go:117] "RemoveContainer" containerID="5289899340a01a653ec7ac1b228e516c26a5e7582db802a8b49f051bfabe2c2f" Nov 24 11:29:13 crc kubenswrapper[5072]: E1124 11:29:13.371974 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5289899340a01a653ec7ac1b228e516c26a5e7582db802a8b49f051bfabe2c2f\": container with ID starting with 5289899340a01a653ec7ac1b228e516c26a5e7582db802a8b49f051bfabe2c2f not found: ID does not exist" containerID="5289899340a01a653ec7ac1b228e516c26a5e7582db802a8b49f051bfabe2c2f" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.372015 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5289899340a01a653ec7ac1b228e516c26a5e7582db802a8b49f051bfabe2c2f"} err="failed to get container status \"5289899340a01a653ec7ac1b228e516c26a5e7582db802a8b49f051bfabe2c2f\": rpc error: code = NotFound desc = could not find container 
\"5289899340a01a653ec7ac1b228e516c26a5e7582db802a8b49f051bfabe2c2f\": container with ID starting with 5289899340a01a653ec7ac1b228e516c26a5e7582db802a8b49f051bfabe2c2f not found: ID does not exist" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.372046 5072 scope.go:117] "RemoveContainer" containerID="50ed5bcf7b58686c9c39d2083331f2f908ec020f73f7ca7435cdf2c9fd7abe38" Nov 24 11:29:13 crc kubenswrapper[5072]: E1124 11:29:13.372298 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50ed5bcf7b58686c9c39d2083331f2f908ec020f73f7ca7435cdf2c9fd7abe38\": container with ID starting with 50ed5bcf7b58686c9c39d2083331f2f908ec020f73f7ca7435cdf2c9fd7abe38 not found: ID does not exist" containerID="50ed5bcf7b58686c9c39d2083331f2f908ec020f73f7ca7435cdf2c9fd7abe38" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.372319 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50ed5bcf7b58686c9c39d2083331f2f908ec020f73f7ca7435cdf2c9fd7abe38"} err="failed to get container status \"50ed5bcf7b58686c9c39d2083331f2f908ec020f73f7ca7435cdf2c9fd7abe38\": rpc error: code = NotFound desc = could not find container \"50ed5bcf7b58686c9c39d2083331f2f908ec020f73f7ca7435cdf2c9fd7abe38\": container with ID starting with 50ed5bcf7b58686c9c39d2083331f2f908ec020f73f7ca7435cdf2c9fd7abe38 not found: ID does not exist" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.412542 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/354afe75-70d3-4c45-a990-0299f821b0af-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "354afe75-70d3-4c45-a990-0299f821b0af" (UID: "354afe75-70d3-4c45-a990-0299f821b0af"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.437133 5072 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/354afe75-70d3-4c45-a990-0299f821b0af-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.453719 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.462069 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.478462 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 11:29:13 crc kubenswrapper[5072]: E1124 11:29:13.478819 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354afe75-70d3-4c45-a990-0299f821b0af" containerName="rabbitmq" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.478835 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="354afe75-70d3-4c45-a990-0299f821b0af" containerName="rabbitmq" Nov 24 11:29:13 crc kubenswrapper[5072]: E1124 11:29:13.478858 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354afe75-70d3-4c45-a990-0299f821b0af" containerName="setup-container" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.478865 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="354afe75-70d3-4c45-a990-0299f821b0af" containerName="setup-container" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.479052 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="354afe75-70d3-4c45-a990-0299f821b0af" containerName="rabbitmq" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.479991 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.482540 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.482749 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.483071 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.484527 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.484592 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.484701 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-md6cz" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.484741 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.495251 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.538478 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.538601 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.538720 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hh48\" (UniqueName: \"kubernetes.io/projected/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-kube-api-access-5hh48\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.540417 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.540502 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.540532 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-config-data\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.540587 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.540617 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.540678 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.540719 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.540777 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.642736 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.642815 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.642841 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hh48\" (UniqueName: \"kubernetes.io/projected/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-kube-api-access-5hh48\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.642886 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " 
pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.642940 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.642960 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-config-data\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.642994 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.643012 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.643037 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.643065 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.643087 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.643580 5072 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.644522 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.644747 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-config-data\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.644893 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.645547 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.647252 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.647609 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.647619 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.650261 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.653322 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.691085 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hh48\" (UniqueName: \"kubernetes.io/projected/02112c1c-a6a9-42e6-938e-e3e8d7b7217c-kube-api-access-5hh48\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.691430 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"02112c1c-a6a9-42e6-938e-e3e8d7b7217c\") " pod="openstack/rabbitmq-server-0" Nov 24 11:29:13 crc kubenswrapper[5072]: I1124 11:29:13.796487 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.361838 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.565146 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.718087 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/224cff60-3d72-478d-9788-926bbca42ad2-erlang-cookie-secret\") pod \"224cff60-3d72-478d-9788-926bbca42ad2\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.718139 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/224cff60-3d72-478d-9788-926bbca42ad2-server-conf\") pod \"224cff60-3d72-478d-9788-926bbca42ad2\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.718171 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/224cff60-3d72-478d-9788-926bbca42ad2-plugins-conf\") pod \"224cff60-3d72-478d-9788-926bbca42ad2\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.718245 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/224cff60-3d72-478d-9788-926bbca42ad2-rabbitmq-erlang-cookie\") pod \"224cff60-3d72-478d-9788-926bbca42ad2\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.718312 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/224cff60-3d72-478d-9788-926bbca42ad2-rabbitmq-plugins\") pod \"224cff60-3d72-478d-9788-926bbca42ad2\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.718818 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/224cff60-3d72-478d-9788-926bbca42ad2-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "224cff60-3d72-478d-9788-926bbca42ad2" (UID: "224cff60-3d72-478d-9788-926bbca42ad2"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.718828 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/224cff60-3d72-478d-9788-926bbca42ad2-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "224cff60-3d72-478d-9788-926bbca42ad2" (UID: "224cff60-3d72-478d-9788-926bbca42ad2"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.718880 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/224cff60-3d72-478d-9788-926bbca42ad2-pod-info\") pod \"224cff60-3d72-478d-9788-926bbca42ad2\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.718936 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/224cff60-3d72-478d-9788-926bbca42ad2-rabbitmq-confd\") pod \"224cff60-3d72-478d-9788-926bbca42ad2\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.719208 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cts8f\" (UniqueName: \"kubernetes.io/projected/224cff60-3d72-478d-9788-926bbca42ad2-kube-api-access-cts8f\") pod \"224cff60-3d72-478d-9788-926bbca42ad2\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.719239 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/224cff60-3d72-478d-9788-926bbca42ad2-config-data\") pod \"224cff60-3d72-478d-9788-926bbca42ad2\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.719602 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"224cff60-3d72-478d-9788-926bbca42ad2\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.719636 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/224cff60-3d72-478d-9788-926bbca42ad2-rabbitmq-tls\") pod \"224cff60-3d72-478d-9788-926bbca42ad2\" (UID: \"224cff60-3d72-478d-9788-926bbca42ad2\") " Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.719393 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/224cff60-3d72-478d-9788-926bbca42ad2-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "224cff60-3d72-478d-9788-926bbca42ad2" (UID: "224cff60-3d72-478d-9788-926bbca42ad2"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.720786 5072 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/224cff60-3d72-478d-9788-926bbca42ad2-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.720818 5072 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/224cff60-3d72-478d-9788-926bbca42ad2-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.720829 5072 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/224cff60-3d72-478d-9788-926bbca42ad2-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.723124 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "224cff60-3d72-478d-9788-926bbca42ad2" (UID: "224cff60-3d72-478d-9788-926bbca42ad2"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.723299 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/224cff60-3d72-478d-9788-926bbca42ad2-pod-info" (OuterVolumeSpecName: "pod-info") pod "224cff60-3d72-478d-9788-926bbca42ad2" (UID: "224cff60-3d72-478d-9788-926bbca42ad2"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.723954 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/224cff60-3d72-478d-9788-926bbca42ad2-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "224cff60-3d72-478d-9788-926bbca42ad2" (UID: "224cff60-3d72-478d-9788-926bbca42ad2"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.727055 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/224cff60-3d72-478d-9788-926bbca42ad2-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "224cff60-3d72-478d-9788-926bbca42ad2" (UID: "224cff60-3d72-478d-9788-926bbca42ad2"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.750794 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/224cff60-3d72-478d-9788-926bbca42ad2-kube-api-access-cts8f" (OuterVolumeSpecName: "kube-api-access-cts8f") pod "224cff60-3d72-478d-9788-926bbca42ad2" (UID: "224cff60-3d72-478d-9788-926bbca42ad2"). InnerVolumeSpecName "kube-api-access-cts8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.764063 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/224cff60-3d72-478d-9788-926bbca42ad2-config-data" (OuterVolumeSpecName: "config-data") pod "224cff60-3d72-478d-9788-926bbca42ad2" (UID: "224cff60-3d72-478d-9788-926bbca42ad2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.816043 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/224cff60-3d72-478d-9788-926bbca42ad2-server-conf" (OuterVolumeSpecName: "server-conf") pod "224cff60-3d72-478d-9788-926bbca42ad2" (UID: "224cff60-3d72-478d-9788-926bbca42ad2"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.823138 5072 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/224cff60-3d72-478d-9788-926bbca42ad2-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.823170 5072 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/224cff60-3d72-478d-9788-926bbca42ad2-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.823182 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cts8f\" (UniqueName: \"kubernetes.io/projected/224cff60-3d72-478d-9788-926bbca42ad2-kube-api-access-cts8f\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.823191 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/224cff60-3d72-478d-9788-926bbca42ad2-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.823214 5072 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.823223 5072 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/224cff60-3d72-478d-9788-926bbca42ad2-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.823231 5072 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/224cff60-3d72-478d-9788-926bbca42ad2-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.847181 5072 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.876952 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/224cff60-3d72-478d-9788-926bbca42ad2-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "224cff60-3d72-478d-9788-926bbca42ad2" (UID: "224cff60-3d72-478d-9788-926bbca42ad2"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.924389 5072 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/224cff60-3d72-478d-9788-926bbca42ad2-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:14 crc kubenswrapper[5072]: I1124 11:29:14.925130 5072 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.026789 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="354afe75-70d3-4c45-a990-0299f821b0af" path="/var/lib/kubelet/pods/354afe75-70d3-4c45-a990-0299f821b0af/volumes" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.158354 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"02112c1c-a6a9-42e6-938e-e3e8d7b7217c","Type":"ContainerStarted","Data":"0e05b4ec78cc1eb481958a6629f01e9ee594006a16529bd7347aa615c1698312"} Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.160566 5072 generic.go:334] "Generic (PLEG): container finished" podID="224cff60-3d72-478d-9788-926bbca42ad2" containerID="7632bd7692c742dde61619c49b4b4c3df75f9dab1b21043cfeb0c078e48057b5" exitCode=0 Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.160609 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"224cff60-3d72-478d-9788-926bbca42ad2","Type":"ContainerDied","Data":"7632bd7692c742dde61619c49b4b4c3df75f9dab1b21043cfeb0c078e48057b5"} Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.160640 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"224cff60-3d72-478d-9788-926bbca42ad2","Type":"ContainerDied","Data":"9627420c3e20b82c910779ae70b18b459e6760fccf8bef29f33639e6dfc6cc89"} Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.160657 5072 scope.go:117] "RemoveContainer" containerID="7632bd7692c742dde61619c49b4b4c3df75f9dab1b21043cfeb0c078e48057b5" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.160675 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.183541 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.187513 5072 scope.go:117] "RemoveContainer" containerID="2e81d597c043ecd78e584bee1d8d13ad13881786d38a4fbb7fe5f5e65775c121" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.191281 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.237490 5072 scope.go:117] "RemoveContainer" containerID="7632bd7692c742dde61619c49b4b4c3df75f9dab1b21043cfeb0c078e48057b5" Nov 24 11:29:15 crc kubenswrapper[5072]: E1124 11:29:15.238020 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7632bd7692c742dde61619c49b4b4c3df75f9dab1b21043cfeb0c078e48057b5\": container with ID starting with 7632bd7692c742dde61619c49b4b4c3df75f9dab1b21043cfeb0c078e48057b5 not found: ID does not exist" containerID="7632bd7692c742dde61619c49b4b4c3df75f9dab1b21043cfeb0c078e48057b5" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.238058 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7632bd7692c742dde61619c49b4b4c3df75f9dab1b21043cfeb0c078e48057b5"} err="failed to get container status \"7632bd7692c742dde61619c49b4b4c3df75f9dab1b21043cfeb0c078e48057b5\": rpc error: code = NotFound desc = could not find container \"7632bd7692c742dde61619c49b4b4c3df75f9dab1b21043cfeb0c078e48057b5\": container with ID starting with 7632bd7692c742dde61619c49b4b4c3df75f9dab1b21043cfeb0c078e48057b5 not found: ID does not exist" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.238089 5072 scope.go:117] "RemoveContainer" containerID="2e81d597c043ecd78e584bee1d8d13ad13881786d38a4fbb7fe5f5e65775c121" Nov 24 11:29:15 crc kubenswrapper[5072]: E1124 11:29:15.238512 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e81d597c043ecd78e584bee1d8d13ad13881786d38a4fbb7fe5f5e65775c121\": container with ID starting with 2e81d597c043ecd78e584bee1d8d13ad13881786d38a4fbb7fe5f5e65775c121 not found: ID does not exist" containerID="2e81d597c043ecd78e584bee1d8d13ad13881786d38a4fbb7fe5f5e65775c121" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.238541 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e81d597c043ecd78e584bee1d8d13ad13881786d38a4fbb7fe5f5e65775c121"} err="failed to get container status \"2e81d597c043ecd78e584bee1d8d13ad13881786d38a4fbb7fe5f5e65775c121\": rpc error: code = NotFound desc = could not find container \"2e81d597c043ecd78e584bee1d8d13ad13881786d38a4fbb7fe5f5e65775c121\": container with ID starting with 2e81d597c043ecd78e584bee1d8d13ad13881786d38a4fbb7fe5f5e65775c121 not found: ID does not exist" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.242780 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 11:29:15 crc kubenswrapper[5072]: E1124 11:29:15.245460 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="224cff60-3d72-478d-9788-926bbca42ad2" containerName="setup-container" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.245496 5072 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="224cff60-3d72-478d-9788-926bbca42ad2" containerName="setup-container" Nov 24 11:29:15 crc kubenswrapper[5072]: E1124 11:29:15.245548 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="224cff60-3d72-478d-9788-926bbca42ad2" containerName="rabbitmq" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.245557 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="224cff60-3d72-478d-9788-926bbca42ad2" containerName="rabbitmq" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.247874 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="224cff60-3d72-478d-9788-926bbca42ad2" containerName="rabbitmq" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.248883 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.255429 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.255619 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.255727 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.255793 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.256249 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.256434 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.256518 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-np4n4" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.268329 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.445239 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/38928c57-6c7d-4fb6-afe8-ed2602e450c3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.445299 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/38928c57-6c7d-4fb6-afe8-ed2602e450c3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.445344 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/38928c57-6c7d-4fb6-afe8-ed2602e450c3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.445398 5072 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/38928c57-6c7d-4fb6-afe8-ed2602e450c3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.445480 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/38928c57-6c7d-4fb6-afe8-ed2602e450c3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.445513 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.445596 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nfxh\" (UniqueName: \"kubernetes.io/projected/38928c57-6c7d-4fb6-afe8-ed2602e450c3-kube-api-access-9nfxh\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.445670 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/38928c57-6c7d-4fb6-afe8-ed2602e450c3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.445705 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/38928c57-6c7d-4fb6-afe8-ed2602e450c3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.445793 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/38928c57-6c7d-4fb6-afe8-ed2602e450c3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.445926 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/38928c57-6c7d-4fb6-afe8-ed2602e450c3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.546953 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/38928c57-6c7d-4fb6-afe8-ed2602e450c3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.547011 5072 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/38928c57-6c7d-4fb6-afe8-ed2602e450c3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.547081 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/38928c57-6c7d-4fb6-afe8-ed2602e450c3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.547104 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.547122 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nfxh\" (UniqueName: \"kubernetes.io/projected/38928c57-6c7d-4fb6-afe8-ed2602e450c3-kube-api-access-9nfxh\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.547145 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/38928c57-6c7d-4fb6-afe8-ed2602e450c3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.547162 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/38928c57-6c7d-4fb6-afe8-ed2602e450c3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.547191 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/38928c57-6c7d-4fb6-afe8-ed2602e450c3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.547213 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/38928c57-6c7d-4fb6-afe8-ed2602e450c3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.547239 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/38928c57-6c7d-4fb6-afe8-ed2602e450c3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.547260 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/38928c57-6c7d-4fb6-afe8-ed2602e450c3-rabbitmq-confd\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.548305 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/38928c57-6c7d-4fb6-afe8-ed2602e450c3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.548486 5072 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.548775 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/38928c57-6c7d-4fb6-afe8-ed2602e450c3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.549181 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/38928c57-6c7d-4fb6-afe8-ed2602e450c3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.549496 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/38928c57-6c7d-4fb6-afe8-ed2602e450c3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.549588 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/38928c57-6c7d-4fb6-afe8-ed2602e450c3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.583747 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/38928c57-6c7d-4fb6-afe8-ed2602e450c3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.588946 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/38928c57-6c7d-4fb6-afe8-ed2602e450c3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.592919 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/38928c57-6c7d-4fb6-afe8-ed2602e450c3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.593093 5072 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/38928c57-6c7d-4fb6-afe8-ed2602e450c3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.593198 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nfxh\" (UniqueName: \"kubernetes.io/projected/38928c57-6c7d-4fb6-afe8-ed2602e450c3-kube-api-access-9nfxh\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.616044 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"38928c57-6c7d-4fb6-afe8-ed2602e450c3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.630803 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-8zxz2"] Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.632524 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.635456 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.641430 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-8zxz2"] Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.761329 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-8zxz2\" (UID: \"5af0262b-936f-4af6-81de-426219aec18b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.761632 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pstfr\" (UniqueName: \"kubernetes.io/projected/5af0262b-936f-4af6-81de-426219aec18b-kube-api-access-pstfr\") pod \"dnsmasq-dns-6447ccbd8f-8zxz2\" (UID: \"5af0262b-936f-4af6-81de-426219aec18b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.761693 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-config\") pod \"dnsmasq-dns-6447ccbd8f-8zxz2\" (UID: \"5af0262b-936f-4af6-81de-426219aec18b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.761755 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-8zxz2\" (UID: \"5af0262b-936f-4af6-81de-426219aec18b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.761904 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-8zxz2\" (UID: \"5af0262b-936f-4af6-81de-426219aec18b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.761962 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-8zxz2\" (UID: \"5af0262b-936f-4af6-81de-426219aec18b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.863269 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-config\") pod \"dnsmasq-dns-6447ccbd8f-8zxz2\" (UID: \"5af0262b-936f-4af6-81de-426219aec18b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.863316 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-8zxz2\" (UID: \"5af0262b-936f-4af6-81de-426219aec18b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.863397 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-8zxz2\" (UID: \"5af0262b-936f-4af6-81de-426219aec18b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.863438 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-8zxz2\" (UID: \"5af0262b-936f-4af6-81de-426219aec18b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.863462 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-8zxz2\" (UID: \"5af0262b-936f-4af6-81de-426219aec18b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.863494 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pstfr\" (UniqueName: \"kubernetes.io/projected/5af0262b-936f-4af6-81de-426219aec18b-kube-api-access-pstfr\") pod \"dnsmasq-dns-6447ccbd8f-8zxz2\" (UID: \"5af0262b-936f-4af6-81de-426219aec18b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.864075 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-config\") pod \"dnsmasq-dns-6447ccbd8f-8zxz2\" (UID: \"5af0262b-936f-4af6-81de-426219aec18b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.864187 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-8zxz2\" (UID: \"5af0262b-936f-4af6-81de-426219aec18b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.864699 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-8zxz2\" (UID: \"5af0262b-936f-4af6-81de-426219aec18b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.864711 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-8zxz2\" (UID: \"5af0262b-936f-4af6-81de-426219aec18b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.865312 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-8zxz2\" (UID: \"5af0262b-936f-4af6-81de-426219aec18b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.872740 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:15 crc kubenswrapper[5072]: I1124 11:29:15.880978 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pstfr\" (UniqueName: \"kubernetes.io/projected/5af0262b-936f-4af6-81de-426219aec18b-kube-api-access-pstfr\") pod \"dnsmasq-dns-6447ccbd8f-8zxz2\" (UID: \"5af0262b-936f-4af6-81de-426219aec18b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" Nov 24 11:29:16 crc kubenswrapper[5072]: I1124 11:29:16.075804 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" Nov 24 11:29:16 crc kubenswrapper[5072]: I1124 11:29:16.182803 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"02112c1c-a6a9-42e6-938e-e3e8d7b7217c","Type":"ContainerStarted","Data":"4bdf92385eff5e4e2cb9f1377d2cefae289ca0e03669249fd6ffadb8aa049f20"} Nov 24 11:29:16 crc kubenswrapper[5072]: W1124 11:29:16.338023 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38928c57_6c7d_4fb6_afe8_ed2602e450c3.slice/crio-353fb8d6da8b47afddcc028aa042d41c556885171fe53e8fcc6efa2dff242a73 WatchSource:0}: Error finding container 353fb8d6da8b47afddcc028aa042d41c556885171fe53e8fcc6efa2dff242a73: Status 404 returned error can't find the container with id 353fb8d6da8b47afddcc028aa042d41c556885171fe53e8fcc6efa2dff242a73 Nov 24 11:29:16 crc kubenswrapper[5072]: I1124 11:29:16.340102 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 11:29:16 crc kubenswrapper[5072]: I1124 11:29:16.671187 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-8zxz2"] Nov 24 11:29:16 crc kubenswrapper[5072]: W1124 11:29:16.675243 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5af0262b_936f_4af6_81de_426219aec18b.slice/crio-2b2cd2af33072b98567000b38bad494c31f3469118ec824ffa27d707b396b403 WatchSource:0}: Error finding container 2b2cd2af33072b98567000b38bad494c31f3469118ec824ffa27d707b396b403: Status 404 returned error can't find the container with id 2b2cd2af33072b98567000b38bad494c31f3469118ec824ffa27d707b396b403 Nov 24 11:29:17 crc kubenswrapper[5072]: I1124 11:29:17.026158 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="224cff60-3d72-478d-9788-926bbca42ad2" path="/var/lib/kubelet/pods/224cff60-3d72-478d-9788-926bbca42ad2/volumes" Nov 24 11:29:17 crc kubenswrapper[5072]: I1124 11:29:17.191034 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"38928c57-6c7d-4fb6-afe8-ed2602e450c3","Type":"ContainerStarted","Data":"353fb8d6da8b47afddcc028aa042d41c556885171fe53e8fcc6efa2dff242a73"} Nov 24 11:29:17 crc kubenswrapper[5072]: I1124 11:29:17.192662 5072 generic.go:334] "Generic (PLEG): container finished" podID="5af0262b-936f-4af6-81de-426219aec18b" containerID="067f575ef9ba90a1abc82c6a2dcdc34c723075b3209e7a89ec14a6e3d40c33e1" exitCode=0 Nov 24 11:29:17 crc kubenswrapper[5072]: I1124 11:29:17.192755 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" event={"ID":"5af0262b-936f-4af6-81de-426219aec18b","Type":"ContainerDied","Data":"067f575ef9ba90a1abc82c6a2dcdc34c723075b3209e7a89ec14a6e3d40c33e1"} Nov 24 11:29:17 crc kubenswrapper[5072]: I1124 11:29:17.192791 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" event={"ID":"5af0262b-936f-4af6-81de-426219aec18b","Type":"ContainerStarted","Data":"2b2cd2af33072b98567000b38bad494c31f3469118ec824ffa27d707b396b403"} Nov 24 11:29:18 crc kubenswrapper[5072]: I1124 11:29:18.223100 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"38928c57-6c7d-4fb6-afe8-ed2602e450c3","Type":"ContainerStarted","Data":"13c42ab69159691f8bc124ad84b65b0d42e5bcc79fb205ae488799e2a53fd04b"} Nov 24 11:29:18 crc 
kubenswrapper[5072]: I1124 11:29:18.226092 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" event={"ID":"5af0262b-936f-4af6-81de-426219aec18b","Type":"ContainerStarted","Data":"3a190c83866fe7e8b1f6e5d05ffad54843ef5e67f1792a9554cd649122d2ade8"} Nov 24 11:29:18 crc kubenswrapper[5072]: I1124 11:29:18.226651 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" Nov 24 11:29:18 crc kubenswrapper[5072]: I1124 11:29:18.281664 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" podStartSLOduration=3.281646242 podStartE2EDuration="3.281646242s" podCreationTimestamp="2025-11-24 11:29:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:29:18.278179416 +0000 UTC m=+1209.989703892" watchObservedRunningTime="2025-11-24 11:29:18.281646242 +0000 UTC m=+1209.993170718" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.077589 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.176319 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-hl4mn"] Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.177018 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b856c5697-hl4mn" podUID="8a85681e-0caa-48f6-8782-301c059a6380" containerName="dnsmasq-dns" containerID="cri-o://ad68e303220191203da71cc8f477c74d48a74897203681270d71f1d1803ce42f" gracePeriod=10 Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.371459 5072 generic.go:334] "Generic (PLEG): container finished" podID="8a85681e-0caa-48f6-8782-301c059a6380" containerID="ad68e303220191203da71cc8f477c74d48a74897203681270d71f1d1803ce42f" exitCode=0 Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.371781 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-hl4mn" event={"ID":"8a85681e-0caa-48f6-8782-301c059a6380","Type":"ContainerDied","Data":"ad68e303220191203da71cc8f477c74d48a74897203681270d71f1d1803ce42f"} Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.384432 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-jrg65"] Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.391649 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.410459 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-jrg65"] Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.510117 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-jrg65\" (UID: \"5621b8b6-4676-4b1c-992c-839a60accf2f\") " pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.510255 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-jrg65\" (UID: \"5621b8b6-4676-4b1c-992c-839a60accf2f\") " pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.510458 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-jrg65\" (UID: \"5621b8b6-4676-4b1c-992c-839a60accf2f\") " pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.510572 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-config\") pod \"dnsmasq-dns-864d5fc68c-jrg65\" (UID: \"5621b8b6-4676-4b1c-992c-839a60accf2f\") " pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.510714 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-jrg65\" (UID: \"5621b8b6-4676-4b1c-992c-839a60accf2f\") " pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.510795 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57q7p\" (UniqueName: \"kubernetes.io/projected/5621b8b6-4676-4b1c-992c-839a60accf2f-kube-api-access-57q7p\") pod \"dnsmasq-dns-864d5fc68c-jrg65\" (UID: \"5621b8b6-4676-4b1c-992c-839a60accf2f\") " pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.616771 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-config\") pod \"dnsmasq-dns-864d5fc68c-jrg65\" (UID: \"5621b8b6-4676-4b1c-992c-839a60accf2f\") " pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.616858 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-jrg65\" (UID: \"5621b8b6-4676-4b1c-992c-839a60accf2f\") " pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.616898 5072 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-57q7p\" (UniqueName: \"kubernetes.io/projected/5621b8b6-4676-4b1c-992c-839a60accf2f-kube-api-access-57q7p\") pod \"dnsmasq-dns-864d5fc68c-jrg65\" (UID: \"5621b8b6-4676-4b1c-992c-839a60accf2f\") " pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.616933 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-jrg65\" (UID: \"5621b8b6-4676-4b1c-992c-839a60accf2f\") " pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.616975 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-jrg65\" (UID: \"5621b8b6-4676-4b1c-992c-839a60accf2f\") " pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.617041 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-jrg65\" (UID: \"5621b8b6-4676-4b1c-992c-839a60accf2f\") " pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.618118 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-jrg65\" (UID: \"5621b8b6-4676-4b1c-992c-839a60accf2f\") " pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.618121 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-config\") pod \"dnsmasq-dns-864d5fc68c-jrg65\" (UID: \"5621b8b6-4676-4b1c-992c-839a60accf2f\") " pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.618800 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-jrg65\" (UID: \"5621b8b6-4676-4b1c-992c-839a60accf2f\") " pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.619928 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-jrg65\" (UID: \"5621b8b6-4676-4b1c-992c-839a60accf2f\") " pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.620296 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-jrg65\" (UID: \"5621b8b6-4676-4b1c-992c-839a60accf2f\") " pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.633555 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57q7p\" (UniqueName: 
\"kubernetes.io/projected/5621b8b6-4676-4b1c-992c-839a60accf2f-kube-api-access-57q7p\") pod \"dnsmasq-dns-864d5fc68c-jrg65\" (UID: \"5621b8b6-4676-4b1c-992c-839a60accf2f\") " pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.708159 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-hl4mn" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.718417 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bj9m\" (UniqueName: \"kubernetes.io/projected/8a85681e-0caa-48f6-8782-301c059a6380-kube-api-access-4bj9m\") pod \"8a85681e-0caa-48f6-8782-301c059a6380\" (UID: \"8a85681e-0caa-48f6-8782-301c059a6380\") " Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.718492 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a85681e-0caa-48f6-8782-301c059a6380-ovsdbserver-nb\") pod \"8a85681e-0caa-48f6-8782-301c059a6380\" (UID: \"8a85681e-0caa-48f6-8782-301c059a6380\") " Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.718661 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a85681e-0caa-48f6-8782-301c059a6380-config\") pod \"8a85681e-0caa-48f6-8782-301c059a6380\" (UID: \"8a85681e-0caa-48f6-8782-301c059a6380\") " Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.718714 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a85681e-0caa-48f6-8782-301c059a6380-ovsdbserver-sb\") pod \"8a85681e-0caa-48f6-8782-301c059a6380\" (UID: \"8a85681e-0caa-48f6-8782-301c059a6380\") " Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.718823 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a85681e-0caa-48f6-8782-301c059a6380-dns-svc\") pod \"8a85681e-0caa-48f6-8782-301c059a6380\" (UID: \"8a85681e-0caa-48f6-8782-301c059a6380\") " Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.722205 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a85681e-0caa-48f6-8782-301c059a6380-kube-api-access-4bj9m" (OuterVolumeSpecName: "kube-api-access-4bj9m") pod "8a85681e-0caa-48f6-8782-301c059a6380" (UID: "8a85681e-0caa-48f6-8782-301c059a6380"). InnerVolumeSpecName "kube-api-access-4bj9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.722477 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.766563 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a85681e-0caa-48f6-8782-301c059a6380-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8a85681e-0caa-48f6-8782-301c059a6380" (UID: "8a85681e-0caa-48f6-8782-301c059a6380"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.780716 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a85681e-0caa-48f6-8782-301c059a6380-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8a85681e-0caa-48f6-8782-301c059a6380" (UID: "8a85681e-0caa-48f6-8782-301c059a6380"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.791323 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a85681e-0caa-48f6-8782-301c059a6380-config" (OuterVolumeSpecName: "config") pod "8a85681e-0caa-48f6-8782-301c059a6380" (UID: "8a85681e-0caa-48f6-8782-301c059a6380"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.791968 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a85681e-0caa-48f6-8782-301c059a6380-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8a85681e-0caa-48f6-8782-301c059a6380" (UID: "8a85681e-0caa-48f6-8782-301c059a6380"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.819963 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a85681e-0caa-48f6-8782-301c059a6380-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.819994 5072 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a85681e-0caa-48f6-8782-301c059a6380-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.820004 5072 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a85681e-0caa-48f6-8782-301c059a6380-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.820013 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bj9m\" (UniqueName: \"kubernetes.io/projected/8a85681e-0caa-48f6-8782-301c059a6380-kube-api-access-4bj9m\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:26 crc kubenswrapper[5072]: I1124 11:29:26.820022 5072 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a85681e-0caa-48f6-8782-301c059a6380-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:27 crc kubenswrapper[5072]: W1124 11:29:27.193467 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5621b8b6_4676_4b1c_992c_839a60accf2f.slice/crio-3716264b6193e6ed9589b5a0c86c39b2eaab02ece8cd351b639fcb5baa94459a WatchSource:0}: Error finding container 3716264b6193e6ed9589b5a0c86c39b2eaab02ece8cd351b639fcb5baa94459a: Status 404 returned error can't find the container with id 3716264b6193e6ed9589b5a0c86c39b2eaab02ece8cd351b639fcb5baa94459a Nov 24 11:29:27 crc kubenswrapper[5072]: I1124 11:29:27.203496 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-jrg65"] Nov 24 11:29:27 crc kubenswrapper[5072]: I1124 11:29:27.387083 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-hl4mn" 
event={"ID":"8a85681e-0caa-48f6-8782-301c059a6380","Type":"ContainerDied","Data":"2be961e1f5737a585999fd66594b58ea46864b77fd06fe9d02261c12603fc722"} Nov 24 11:29:27 crc kubenswrapper[5072]: I1124 11:29:27.387162 5072 scope.go:117] "RemoveContainer" containerID="ad68e303220191203da71cc8f477c74d48a74897203681270d71f1d1803ce42f" Nov 24 11:29:27 crc kubenswrapper[5072]: I1124 11:29:27.387340 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-hl4mn" Nov 24 11:29:27 crc kubenswrapper[5072]: I1124 11:29:27.396324 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" event={"ID":"5621b8b6-4676-4b1c-992c-839a60accf2f","Type":"ContainerStarted","Data":"3716264b6193e6ed9589b5a0c86c39b2eaab02ece8cd351b639fcb5baa94459a"} Nov 24 11:29:27 crc kubenswrapper[5072]: I1124 11:29:27.455887 5072 scope.go:117] "RemoveContainer" containerID="8ce26fce3409fdaa9d8fbdb51e6a94dc52eba262d55fe9f8c18693fe3377d195" Nov 24 11:29:27 crc kubenswrapper[5072]: I1124 11:29:27.480693 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-hl4mn"] Nov 24 11:29:27 crc kubenswrapper[5072]: I1124 11:29:27.484369 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-hl4mn"] Nov 24 11:29:28 crc kubenswrapper[5072]: I1124 11:29:28.406943 5072 generic.go:334] "Generic (PLEG): container finished" podID="5621b8b6-4676-4b1c-992c-839a60accf2f" containerID="9d904d00700c38dbefc8e8705784549afa994843f6e475f67fc3b4ee79347a20" exitCode=0 Nov 24 11:29:28 crc kubenswrapper[5072]: I1124 11:29:28.407045 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" event={"ID":"5621b8b6-4676-4b1c-992c-839a60accf2f","Type":"ContainerDied","Data":"9d904d00700c38dbefc8e8705784549afa994843f6e475f67fc3b4ee79347a20"} Nov 24 11:29:29 crc kubenswrapper[5072]: I1124 11:29:29.038940 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a85681e-0caa-48f6-8782-301c059a6380" path="/var/lib/kubelet/pods/8a85681e-0caa-48f6-8782-301c059a6380/volumes" Nov 24 11:29:29 crc kubenswrapper[5072]: I1124 11:29:29.422703 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" event={"ID":"5621b8b6-4676-4b1c-992c-839a60accf2f","Type":"ContainerStarted","Data":"a3d4be33e860993bf6cf98325480de7cbe9f49c4cf2d65e2e3c0445b781fb432"} Nov 24 11:29:29 crc kubenswrapper[5072]: I1124 11:29:29.424209 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" Nov 24 11:29:29 crc kubenswrapper[5072]: I1124 11:29:29.450622 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" podStartSLOduration=3.450598239 podStartE2EDuration="3.450598239s" podCreationTimestamp="2025-11-24 11:29:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:29:29.448553079 +0000 UTC m=+1221.160077595" watchObservedRunningTime="2025-11-24 11:29:29.450598239 +0000 UTC m=+1221.162122745" Nov 24 11:29:36 crc kubenswrapper[5072]: I1124 11:29:36.724590 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" Nov 24 11:29:36 crc kubenswrapper[5072]: I1124 11:29:36.834285 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-6447ccbd8f-8zxz2"] Nov 24 11:29:36 crc kubenswrapper[5072]: I1124 11:29:36.834587 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" podUID="5af0262b-936f-4af6-81de-426219aec18b" containerName="dnsmasq-dns" containerID="cri-o://3a190c83866fe7e8b1f6e5d05ffad54843ef5e67f1792a9554cd649122d2ade8" gracePeriod=10 Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.351318 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.534123 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-ovsdbserver-sb\") pod \"5af0262b-936f-4af6-81de-426219aec18b\" (UID: \"5af0262b-936f-4af6-81de-426219aec18b\") " Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.534226 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-ovsdbserver-nb\") pod \"5af0262b-936f-4af6-81de-426219aec18b\" (UID: \"5af0262b-936f-4af6-81de-426219aec18b\") " Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.534337 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-dns-svc\") pod \"5af0262b-936f-4af6-81de-426219aec18b\" (UID: \"5af0262b-936f-4af6-81de-426219aec18b\") " Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.534559 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-openstack-edpm-ipam\") pod \"5af0262b-936f-4af6-81de-426219aec18b\" (UID: \"5af0262b-936f-4af6-81de-426219aec18b\") " Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.534643 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-config\") pod \"5af0262b-936f-4af6-81de-426219aec18b\" (UID: \"5af0262b-936f-4af6-81de-426219aec18b\") " Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.534715 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pstfr\" (UniqueName: \"kubernetes.io/projected/5af0262b-936f-4af6-81de-426219aec18b-kube-api-access-pstfr\") pod \"5af0262b-936f-4af6-81de-426219aec18b\" (UID: \"5af0262b-936f-4af6-81de-426219aec18b\") " Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.541731 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5af0262b-936f-4af6-81de-426219aec18b-kube-api-access-pstfr" (OuterVolumeSpecName: "kube-api-access-pstfr") pod "5af0262b-936f-4af6-81de-426219aec18b" (UID: "5af0262b-936f-4af6-81de-426219aec18b"). InnerVolumeSpecName "kube-api-access-pstfr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.548109 5072 generic.go:334] "Generic (PLEG): container finished" podID="5af0262b-936f-4af6-81de-426219aec18b" containerID="3a190c83866fe7e8b1f6e5d05ffad54843ef5e67f1792a9554cd649122d2ade8" exitCode=0 Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.548161 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" event={"ID":"5af0262b-936f-4af6-81de-426219aec18b","Type":"ContainerDied","Data":"3a190c83866fe7e8b1f6e5d05ffad54843ef5e67f1792a9554cd649122d2ade8"} Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.548200 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" event={"ID":"5af0262b-936f-4af6-81de-426219aec18b","Type":"ContainerDied","Data":"2b2cd2af33072b98567000b38bad494c31f3469118ec824ffa27d707b396b403"} Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.548228 5072 scope.go:117] "RemoveContainer" containerID="3a190c83866fe7e8b1f6e5d05ffad54843ef5e67f1792a9554cd649122d2ade8" Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.548425 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-8zxz2" Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.629829 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-config" (OuterVolumeSpecName: "config") pod "5af0262b-936f-4af6-81de-426219aec18b" (UID: "5af0262b-936f-4af6-81de-426219aec18b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.630923 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5af0262b-936f-4af6-81de-426219aec18b" (UID: "5af0262b-936f-4af6-81de-426219aec18b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.639918 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5af0262b-936f-4af6-81de-426219aec18b" (UID: "5af0262b-936f-4af6-81de-426219aec18b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.644117 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.644231 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pstfr\" (UniqueName: \"kubernetes.io/projected/5af0262b-936f-4af6-81de-426219aec18b-kube-api-access-pstfr\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.644319 5072 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.670162 5072 scope.go:117] "RemoveContainer" containerID="067f575ef9ba90a1abc82c6a2dcdc34c723075b3209e7a89ec14a6e3d40c33e1" Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.709157 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5af0262b-936f-4af6-81de-426219aec18b" (UID: "5af0262b-936f-4af6-81de-426219aec18b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.709320 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "5af0262b-936f-4af6-81de-426219aec18b" (UID: "5af0262b-936f-4af6-81de-426219aec18b"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.745693 5072 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.746964 5072 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.747089 5072 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5af0262b-936f-4af6-81de-426219aec18b-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.780899 5072 scope.go:117] "RemoveContainer" containerID="3a190c83866fe7e8b1f6e5d05ffad54843ef5e67f1792a9554cd649122d2ade8" Nov 24 11:29:37 crc kubenswrapper[5072]: E1124 11:29:37.781491 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a190c83866fe7e8b1f6e5d05ffad54843ef5e67f1792a9554cd649122d2ade8\": container with ID starting with 3a190c83866fe7e8b1f6e5d05ffad54843ef5e67f1792a9554cd649122d2ade8 not found: ID does not exist" containerID="3a190c83866fe7e8b1f6e5d05ffad54843ef5e67f1792a9554cd649122d2ade8" Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.781767 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a190c83866fe7e8b1f6e5d05ffad54843ef5e67f1792a9554cd649122d2ade8"} err="failed to get container status \"3a190c83866fe7e8b1f6e5d05ffad54843ef5e67f1792a9554cd649122d2ade8\": rpc error: code = NotFound desc = could not find container \"3a190c83866fe7e8b1f6e5d05ffad54843ef5e67f1792a9554cd649122d2ade8\": container with ID starting with 3a190c83866fe7e8b1f6e5d05ffad54843ef5e67f1792a9554cd649122d2ade8 not found: ID does not exist" Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.781893 5072 scope.go:117] "RemoveContainer" containerID="067f575ef9ba90a1abc82c6a2dcdc34c723075b3209e7a89ec14a6e3d40c33e1" Nov 24 11:29:37 crc kubenswrapper[5072]: E1124 11:29:37.782488 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"067f575ef9ba90a1abc82c6a2dcdc34c723075b3209e7a89ec14a6e3d40c33e1\": container with ID starting with 067f575ef9ba90a1abc82c6a2dcdc34c723075b3209e7a89ec14a6e3d40c33e1 not found: ID does not exist" containerID="067f575ef9ba90a1abc82c6a2dcdc34c723075b3209e7a89ec14a6e3d40c33e1" Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.782595 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"067f575ef9ba90a1abc82c6a2dcdc34c723075b3209e7a89ec14a6e3d40c33e1"} err="failed to get container status \"067f575ef9ba90a1abc82c6a2dcdc34c723075b3209e7a89ec14a6e3d40c33e1\": rpc error: code = NotFound desc = could not find container \"067f575ef9ba90a1abc82c6a2dcdc34c723075b3209e7a89ec14a6e3d40c33e1\": container with ID starting with 067f575ef9ba90a1abc82c6a2dcdc34c723075b3209e7a89ec14a6e3d40c33e1 not found: ID does not exist" Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 11:29:37.877158 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-8zxz2"] Nov 24 11:29:37 crc kubenswrapper[5072]: I1124 
Nov 24 11:29:39 crc kubenswrapper[5072]: I1124 11:29:39.025348 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5af0262b-936f-4af6-81de-426219aec18b" path="/var/lib/kubelet/pods/5af0262b-936f-4af6-81de-426219aec18b/volumes" Nov 24 11:29:46 crc kubenswrapper[5072]: I1124 11:29:46.872748 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn"] Nov 24 11:29:46 crc kubenswrapper[5072]: E1124 11:29:46.873592 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a85681e-0caa-48f6-8782-301c059a6380" containerName="init" Nov 24 11:29:46 crc kubenswrapper[5072]: I1124 11:29:46.873607 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a85681e-0caa-48f6-8782-301c059a6380" containerName="init" Nov 24 11:29:46 crc kubenswrapper[5072]: E1124 11:29:46.873621 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5af0262b-936f-4af6-81de-426219aec18b" containerName="dnsmasq-dns" Nov 24 11:29:46 crc kubenswrapper[5072]: I1124 11:29:46.873628 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af0262b-936f-4af6-81de-426219aec18b" containerName="dnsmasq-dns" Nov 24 11:29:46 crc kubenswrapper[5072]: E1124 11:29:46.873644 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5af0262b-936f-4af6-81de-426219aec18b" containerName="init" Nov 24 11:29:46 crc kubenswrapper[5072]: I1124 11:29:46.873653 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af0262b-936f-4af6-81de-426219aec18b" containerName="init" Nov 24 11:29:46 crc kubenswrapper[5072]: E1124 11:29:46.873671 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a85681e-0caa-48f6-8782-301c059a6380" containerName="dnsmasq-dns" Nov 24 11:29:46 crc kubenswrapper[5072]: I1124 11:29:46.873678 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a85681e-0caa-48f6-8782-301c059a6380" containerName="dnsmasq-dns" Nov 24 11:29:46 crc kubenswrapper[5072]: I1124 11:29:46.873896 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a85681e-0caa-48f6-8782-301c059a6380" containerName="dnsmasq-dns" Nov 24 11:29:46 crc kubenswrapper[5072]: I1124 11:29:46.873912 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="5af0262b-936f-4af6-81de-426219aec18b" containerName="dnsmasq-dns" Nov 24 11:29:46 crc kubenswrapper[5072]: I1124 11:29:46.874627 5072 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn" Nov 24 11:29:46 crc kubenswrapper[5072]: I1124 11:29:46.879718 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:29:46 crc kubenswrapper[5072]: I1124 11:29:46.879902 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:29:46 crc kubenswrapper[5072]: I1124 11:29:46.879958 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b6s7d" Nov 24 11:29:46 crc kubenswrapper[5072]: I1124 11:29:46.880659 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:29:46 crc kubenswrapper[5072]: I1124 11:29:46.898043 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn"] Nov 24 11:29:47 crc kubenswrapper[5072]: I1124 11:29:47.043836 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45bca15f-243e-425b-b451-de61c3da8a4d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn\" (UID: \"45bca15f-243e-425b-b451-de61c3da8a4d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn" Nov 24 11:29:47 crc kubenswrapper[5072]: I1124 11:29:47.043955 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45bca15f-243e-425b-b451-de61c3da8a4d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn\" (UID: \"45bca15f-243e-425b-b451-de61c3da8a4d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn" Nov 24 11:29:47 crc kubenswrapper[5072]: I1124 11:29:47.044061 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmx5x\" (UniqueName: \"kubernetes.io/projected/45bca15f-243e-425b-b451-de61c3da8a4d-kube-api-access-mmx5x\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn\" (UID: \"45bca15f-243e-425b-b451-de61c3da8a4d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn" Nov 24 11:29:47 crc kubenswrapper[5072]: I1124 11:29:47.044179 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45bca15f-243e-425b-b451-de61c3da8a4d-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn\" (UID: \"45bca15f-243e-425b-b451-de61c3da8a4d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn" Nov 24 11:29:47 crc kubenswrapper[5072]: I1124 11:29:47.146115 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmx5x\" (UniqueName: \"kubernetes.io/projected/45bca15f-243e-425b-b451-de61c3da8a4d-kube-api-access-mmx5x\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn\" (UID: \"45bca15f-243e-425b-b451-de61c3da8a4d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn" Nov 24 11:29:47 crc kubenswrapper[5072]: I1124 11:29:47.146283 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45bca15f-243e-425b-b451-de61c3da8a4d-ssh-key\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn\" (UID: \"45bca15f-243e-425b-b451-de61c3da8a4d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn" Nov 24 11:29:47 crc kubenswrapper[5072]: I1124 11:29:47.146472 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45bca15f-243e-425b-b451-de61c3da8a4d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn\" (UID: \"45bca15f-243e-425b-b451-de61c3da8a4d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn" Nov 24 11:29:47 crc kubenswrapper[5072]: I1124 11:29:47.146603 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45bca15f-243e-425b-b451-de61c3da8a4d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn\" (UID: \"45bca15f-243e-425b-b451-de61c3da8a4d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn" Nov 24 11:29:47 crc kubenswrapper[5072]: I1124 11:29:47.152037 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45bca15f-243e-425b-b451-de61c3da8a4d-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn\" (UID: \"45bca15f-243e-425b-b451-de61c3da8a4d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn" Nov 24 11:29:47 crc kubenswrapper[5072]: I1124 11:29:47.153481 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45bca15f-243e-425b-b451-de61c3da8a4d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn\" (UID: \"45bca15f-243e-425b-b451-de61c3da8a4d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn" Nov 24 11:29:47 crc kubenswrapper[5072]: I1124 11:29:47.155159 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45bca15f-243e-425b-b451-de61c3da8a4d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn\" (UID: \"45bca15f-243e-425b-b451-de61c3da8a4d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn" Nov 24 11:29:47 crc kubenswrapper[5072]: I1124 11:29:47.174250 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmx5x\" (UniqueName: \"kubernetes.io/projected/45bca15f-243e-425b-b451-de61c3da8a4d-kube-api-access-mmx5x\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn\" (UID: \"45bca15f-243e-425b-b451-de61c3da8a4d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn" Nov 24 11:29:47 crc kubenswrapper[5072]: I1124 11:29:47.195076 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn" Nov 24 11:29:47 crc kubenswrapper[5072]: I1124 11:29:47.576279 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn"] Nov 24 11:29:47 crc kubenswrapper[5072]: I1124 11:29:47.585061 5072 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 11:29:47 crc kubenswrapper[5072]: I1124 11:29:47.628413 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn" event={"ID":"45bca15f-243e-425b-b451-de61c3da8a4d","Type":"ContainerStarted","Data":"dad518787af8bd939b0b7710d695fa40c4085af4ef46e99d4a322da6674601a0"} Nov 24 11:29:48 crc kubenswrapper[5072]: I1124 11:29:48.640807 5072 generic.go:334] "Generic (PLEG): container finished" podID="02112c1c-a6a9-42e6-938e-e3e8d7b7217c" containerID="4bdf92385eff5e4e2cb9f1377d2cefae289ca0e03669249fd6ffadb8aa049f20" exitCode=0 Nov 24 11:29:48 crc kubenswrapper[5072]: I1124 11:29:48.640853 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"02112c1c-a6a9-42e6-938e-e3e8d7b7217c","Type":"ContainerDied","Data":"4bdf92385eff5e4e2cb9f1377d2cefae289ca0e03669249fd6ffadb8aa049f20"} Nov 24 11:29:49 crc kubenswrapper[5072]: I1124 11:29:49.656364 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"02112c1c-a6a9-42e6-938e-e3e8d7b7217c","Type":"ContainerStarted","Data":"7e650747a1514da4122cfa464d103ce09334531a341dba0b1203bba9e57a3e65"} Nov 24 11:29:49 crc kubenswrapper[5072]: I1124 11:29:49.657533 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 24 11:29:49 crc kubenswrapper[5072]: I1124 11:29:49.691796 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.691772255 podStartE2EDuration="36.691772255s" podCreationTimestamp="2025-11-24 11:29:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:29:49.680528153 +0000 UTC m=+1241.392052659" watchObservedRunningTime="2025-11-24 11:29:49.691772255 +0000 UTC m=+1241.403296771" Nov 24 11:29:50 crc kubenswrapper[5072]: I1124 11:29:50.668952 5072 generic.go:334] "Generic (PLEG): container finished" podID="38928c57-6c7d-4fb6-afe8-ed2602e450c3" containerID="13c42ab69159691f8bc124ad84b65b0d42e5bcc79fb205ae488799e2a53fd04b" exitCode=0 Nov 24 11:29:50 crc kubenswrapper[5072]: I1124 11:29:50.669249 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"38928c57-6c7d-4fb6-afe8-ed2602e450c3","Type":"ContainerDied","Data":"13c42ab69159691f8bc124ad84b65b0d42e5bcc79fb205ae488799e2a53fd04b"} Nov 24 11:29:57 crc kubenswrapper[5072]: I1124 11:29:57.751926 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn" event={"ID":"45bca15f-243e-425b-b451-de61c3da8a4d","Type":"ContainerStarted","Data":"c834356271529c6c1adb078853d64923e8a035431fdb0383ccbbe222234378be"} Nov 24 11:29:57 crc kubenswrapper[5072]: I1124 11:29:57.755952 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"38928c57-6c7d-4fb6-afe8-ed2602e450c3","Type":"ContainerStarted","Data":"c3cec8bbe63a8d6019063a071e289b1d6c2a2940e21b4792ce78fa7cc7f656f9"} Nov 24 11:29:57 crc kubenswrapper[5072]: I1124 11:29:57.756233 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:29:57 crc kubenswrapper[5072]: I1124 11:29:57.774996 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn" podStartSLOduration=2.244042529 podStartE2EDuration="11.774978721s" podCreationTimestamp="2025-11-24 11:29:46 +0000 UTC" firstStartedPulling="2025-11-24 11:29:47.584594112 +0000 UTC m=+1239.296118628" lastFinishedPulling="2025-11-24 11:29:57.115530344 +0000 UTC m=+1248.827054820" observedRunningTime="2025-11-24 11:29:57.774712474 +0000 UTC m=+1249.486236960" watchObservedRunningTime="2025-11-24 11:29:57.774978721 +0000 UTC m=+1249.486503197" Nov 24 11:29:57 crc kubenswrapper[5072]: I1124 11:29:57.811418 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=42.811396064 podStartE2EDuration="42.811396064s" podCreationTimestamp="2025-11-24 11:29:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:29:57.805724786 +0000 UTC m=+1249.517249262" watchObservedRunningTime="2025-11-24 11:29:57.811396064 +0000 UTC m=+1249.522920550" Nov 24 11:30:00 crc kubenswrapper[5072]: I1124 11:30:00.170983 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399730-5x49b"] Nov 24 11:30:00 crc kubenswrapper[5072]: I1124 11:30:00.173535 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-5x49b" Nov 24 11:30:00 crc kubenswrapper[5072]: I1124 11:30:00.177533 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 11:30:00 crc kubenswrapper[5072]: I1124 11:30:00.178976 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 11:30:00 crc kubenswrapper[5072]: I1124 11:30:00.183628 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399730-5x49b"] Nov 24 11:30:00 crc kubenswrapper[5072]: I1124 11:30:00.207143 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c99ac2b9-7719-430e-b9f0-6263982af569-config-volume\") pod \"collect-profiles-29399730-5x49b\" (UID: \"c99ac2b9-7719-430e-b9f0-6263982af569\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-5x49b" Nov 24 11:30:00 crc kubenswrapper[5072]: I1124 11:30:00.207267 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c99ac2b9-7719-430e-b9f0-6263982af569-secret-volume\") pod \"collect-profiles-29399730-5x49b\" (UID: \"c99ac2b9-7719-430e-b9f0-6263982af569\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-5x49b" Nov 24 11:30:00 crc kubenswrapper[5072]: I1124 11:30:00.207471 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sgxn\" (UniqueName: \"kubernetes.io/projected/c99ac2b9-7719-430e-b9f0-6263982af569-kube-api-access-6sgxn\") pod \"collect-profiles-29399730-5x49b\" (UID: \"c99ac2b9-7719-430e-b9f0-6263982af569\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-5x49b" Nov 24 11:30:00 crc kubenswrapper[5072]: I1124 11:30:00.309032 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c99ac2b9-7719-430e-b9f0-6263982af569-config-volume\") pod \"collect-profiles-29399730-5x49b\" (UID: \"c99ac2b9-7719-430e-b9f0-6263982af569\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-5x49b" Nov 24 11:30:00 crc kubenswrapper[5072]: I1124 11:30:00.309135 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c99ac2b9-7719-430e-b9f0-6263982af569-secret-volume\") pod \"collect-profiles-29399730-5x49b\" (UID: \"c99ac2b9-7719-430e-b9f0-6263982af569\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-5x49b" Nov 24 11:30:00 crc kubenswrapper[5072]: I1124 11:30:00.309189 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sgxn\" (UniqueName: \"kubernetes.io/projected/c99ac2b9-7719-430e-b9f0-6263982af569-kube-api-access-6sgxn\") pod \"collect-profiles-29399730-5x49b\" (UID: \"c99ac2b9-7719-430e-b9f0-6263982af569\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-5x49b" Nov 24 11:30:00 crc kubenswrapper[5072]: I1124 11:30:00.309893 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c99ac2b9-7719-430e-b9f0-6263982af569-config-volume\") pod 
\"collect-profiles-29399730-5x49b\" (UID: \"c99ac2b9-7719-430e-b9f0-6263982af569\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-5x49b" Nov 24 11:30:00 crc kubenswrapper[5072]: I1124 11:30:00.320462 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c99ac2b9-7719-430e-b9f0-6263982af569-secret-volume\") pod \"collect-profiles-29399730-5x49b\" (UID: \"c99ac2b9-7719-430e-b9f0-6263982af569\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-5x49b" Nov 24 11:30:00 crc kubenswrapper[5072]: I1124 11:30:00.324281 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sgxn\" (UniqueName: \"kubernetes.io/projected/c99ac2b9-7719-430e-b9f0-6263982af569-kube-api-access-6sgxn\") pod \"collect-profiles-29399730-5x49b\" (UID: \"c99ac2b9-7719-430e-b9f0-6263982af569\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-5x49b" Nov 24 11:30:00 crc kubenswrapper[5072]: I1124 11:30:00.491553 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-5x49b" Nov 24 11:30:00 crc kubenswrapper[5072]: I1124 11:30:00.976312 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399730-5x49b"] Nov 24 11:30:00 crc kubenswrapper[5072]: W1124 11:30:00.987615 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc99ac2b9_7719_430e_b9f0_6263982af569.slice/crio-2d1136d7b832ae89eb55776fe495e4469c0a2097ed112a7b1a86b0408b685506 WatchSource:0}: Error finding container 2d1136d7b832ae89eb55776fe495e4469c0a2097ed112a7b1a86b0408b685506: Status 404 returned error can't find the container with id 2d1136d7b832ae89eb55776fe495e4469c0a2097ed112a7b1a86b0408b685506 Nov 24 11:30:01 crc kubenswrapper[5072]: I1124 11:30:01.816108 5072 generic.go:334] "Generic (PLEG): container finished" podID="c99ac2b9-7719-430e-b9f0-6263982af569" containerID="0b0cb3684360fc9348a582818d545846e3fe9c5608368c434a21218e947a7fa4" exitCode=0 Nov 24 11:30:01 crc kubenswrapper[5072]: I1124 11:30:01.816221 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-5x49b" event={"ID":"c99ac2b9-7719-430e-b9f0-6263982af569","Type":"ContainerDied","Data":"0b0cb3684360fc9348a582818d545846e3fe9c5608368c434a21218e947a7fa4"} Nov 24 11:30:01 crc kubenswrapper[5072]: I1124 11:30:01.816472 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-5x49b" event={"ID":"c99ac2b9-7719-430e-b9f0-6263982af569","Type":"ContainerStarted","Data":"2d1136d7b832ae89eb55776fe495e4469c0a2097ed112a7b1a86b0408b685506"} Nov 24 11:30:03 crc kubenswrapper[5072]: I1124 11:30:03.212732 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-5x49b" Nov 24 11:30:03 crc kubenswrapper[5072]: I1124 11:30:03.267132 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sgxn\" (UniqueName: \"kubernetes.io/projected/c99ac2b9-7719-430e-b9f0-6263982af569-kube-api-access-6sgxn\") pod \"c99ac2b9-7719-430e-b9f0-6263982af569\" (UID: \"c99ac2b9-7719-430e-b9f0-6263982af569\") " Nov 24 11:30:03 crc kubenswrapper[5072]: I1124 11:30:03.267179 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c99ac2b9-7719-430e-b9f0-6263982af569-config-volume\") pod \"c99ac2b9-7719-430e-b9f0-6263982af569\" (UID: \"c99ac2b9-7719-430e-b9f0-6263982af569\") " Nov 24 11:30:03 crc kubenswrapper[5072]: I1124 11:30:03.267225 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c99ac2b9-7719-430e-b9f0-6263982af569-secret-volume\") pod \"c99ac2b9-7719-430e-b9f0-6263982af569\" (UID: \"c99ac2b9-7719-430e-b9f0-6263982af569\") " Nov 24 11:30:03 crc kubenswrapper[5072]: I1124 11:30:03.268052 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c99ac2b9-7719-430e-b9f0-6263982af569-config-volume" (OuterVolumeSpecName: "config-volume") pod "c99ac2b9-7719-430e-b9f0-6263982af569" (UID: "c99ac2b9-7719-430e-b9f0-6263982af569"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:30:03 crc kubenswrapper[5072]: I1124 11:30:03.274318 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c99ac2b9-7719-430e-b9f0-6263982af569-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c99ac2b9-7719-430e-b9f0-6263982af569" (UID: "c99ac2b9-7719-430e-b9f0-6263982af569"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:30:03 crc kubenswrapper[5072]: I1124 11:30:03.274447 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c99ac2b9-7719-430e-b9f0-6263982af569-kube-api-access-6sgxn" (OuterVolumeSpecName: "kube-api-access-6sgxn") pod "c99ac2b9-7719-430e-b9f0-6263982af569" (UID: "c99ac2b9-7719-430e-b9f0-6263982af569"). InnerVolumeSpecName "kube-api-access-6sgxn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:30:03 crc kubenswrapper[5072]: I1124 11:30:03.368649 5072 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c99ac2b9-7719-430e-b9f0-6263982af569-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:03 crc kubenswrapper[5072]: I1124 11:30:03.368938 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sgxn\" (UniqueName: \"kubernetes.io/projected/c99ac2b9-7719-430e-b9f0-6263982af569-kube-api-access-6sgxn\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:03 crc kubenswrapper[5072]: I1124 11:30:03.368947 5072 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c99ac2b9-7719-430e-b9f0-6263982af569-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:03 crc kubenswrapper[5072]: I1124 11:30:03.800650 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 24 11:30:03 crc kubenswrapper[5072]: I1124 11:30:03.840824 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-5x49b" event={"ID":"c99ac2b9-7719-430e-b9f0-6263982af569","Type":"ContainerDied","Data":"2d1136d7b832ae89eb55776fe495e4469c0a2097ed112a7b1a86b0408b685506"} Nov 24 11:30:03 crc kubenswrapper[5072]: I1124 11:30:03.840861 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d1136d7b832ae89eb55776fe495e4469c0a2097ed112a7b1a86b0408b685506" Nov 24 11:30:03 crc kubenswrapper[5072]: I1124 11:30:03.840908 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-5x49b" Nov 24 11:30:09 crc kubenswrapper[5072]: I1124 11:30:09.912869 5072 generic.go:334] "Generic (PLEG): container finished" podID="45bca15f-243e-425b-b451-de61c3da8a4d" containerID="c834356271529c6c1adb078853d64923e8a035431fdb0383ccbbe222234378be" exitCode=0 Nov 24 11:30:09 crc kubenswrapper[5072]: I1124 11:30:09.912955 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn" event={"ID":"45bca15f-243e-425b-b451-de61c3da8a4d","Type":"ContainerDied","Data":"c834356271529c6c1adb078853d64923e8a035431fdb0383ccbbe222234378be"} Nov 24 11:30:10 crc kubenswrapper[5072]: E1124 11:30:10.078039 5072 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45bca15f_243e_425b_b451_de61c3da8a4d.slice/crio-c834356271529c6c1adb078853d64923e8a035431fdb0383ccbbe222234378be.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45bca15f_243e_425b_b451_de61c3da8a4d.slice/crio-conmon-c834356271529c6c1adb078853d64923e8a035431fdb0383ccbbe222234378be.scope\": RecentStats: unable to find data in memory cache]" Nov 24 11:30:11 crc kubenswrapper[5072]: I1124 11:30:11.387709 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn" Nov 24 11:30:11 crc kubenswrapper[5072]: I1124 11:30:11.572168 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45bca15f-243e-425b-b451-de61c3da8a4d-ssh-key\") pod \"45bca15f-243e-425b-b451-de61c3da8a4d\" (UID: \"45bca15f-243e-425b-b451-de61c3da8a4d\") " Nov 24 11:30:11 crc kubenswrapper[5072]: I1124 11:30:11.572265 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45bca15f-243e-425b-b451-de61c3da8a4d-repo-setup-combined-ca-bundle\") pod \"45bca15f-243e-425b-b451-de61c3da8a4d\" (UID: \"45bca15f-243e-425b-b451-de61c3da8a4d\") " Nov 24 11:30:11 crc kubenswrapper[5072]: I1124 11:30:11.572316 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45bca15f-243e-425b-b451-de61c3da8a4d-inventory\") pod \"45bca15f-243e-425b-b451-de61c3da8a4d\" (UID: \"45bca15f-243e-425b-b451-de61c3da8a4d\") " Nov 24 11:30:11 crc kubenswrapper[5072]: I1124 11:30:11.572406 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmx5x\" (UniqueName: \"kubernetes.io/projected/45bca15f-243e-425b-b451-de61c3da8a4d-kube-api-access-mmx5x\") pod \"45bca15f-243e-425b-b451-de61c3da8a4d\" (UID: \"45bca15f-243e-425b-b451-de61c3da8a4d\") " Nov 24 11:30:11 crc kubenswrapper[5072]: I1124 11:30:11.581592 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bca15f-243e-425b-b451-de61c3da8a4d-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "45bca15f-243e-425b-b451-de61c3da8a4d" (UID: "45bca15f-243e-425b-b451-de61c3da8a4d"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:30:11 crc kubenswrapper[5072]: I1124 11:30:11.584891 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45bca15f-243e-425b-b451-de61c3da8a4d-kube-api-access-mmx5x" (OuterVolumeSpecName: "kube-api-access-mmx5x") pod "45bca15f-243e-425b-b451-de61c3da8a4d" (UID: "45bca15f-243e-425b-b451-de61c3da8a4d"). InnerVolumeSpecName "kube-api-access-mmx5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:30:11 crc kubenswrapper[5072]: I1124 11:30:11.602276 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bca15f-243e-425b-b451-de61c3da8a4d-inventory" (OuterVolumeSpecName: "inventory") pod "45bca15f-243e-425b-b451-de61c3da8a4d" (UID: "45bca15f-243e-425b-b451-de61c3da8a4d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:30:11 crc kubenswrapper[5072]: I1124 11:30:11.608508 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bca15f-243e-425b-b451-de61c3da8a4d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "45bca15f-243e-425b-b451-de61c3da8a4d" (UID: "45bca15f-243e-425b-b451-de61c3da8a4d"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:30:11 crc kubenswrapper[5072]: I1124 11:30:11.674019 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmx5x\" (UniqueName: \"kubernetes.io/projected/45bca15f-243e-425b-b451-de61c3da8a4d-kube-api-access-mmx5x\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:11 crc kubenswrapper[5072]: I1124 11:30:11.674062 5072 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45bca15f-243e-425b-b451-de61c3da8a4d-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:11 crc kubenswrapper[5072]: I1124 11:30:11.674076 5072 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45bca15f-243e-425b-b451-de61c3da8a4d-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:11 crc kubenswrapper[5072]: I1124 11:30:11.674090 5072 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45bca15f-243e-425b-b451-de61c3da8a4d-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:11 crc kubenswrapper[5072]: I1124 11:30:11.939625 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn" event={"ID":"45bca15f-243e-425b-b451-de61c3da8a4d","Type":"ContainerDied","Data":"dad518787af8bd939b0b7710d695fa40c4085af4ef46e99d4a322da6674601a0"} Nov 24 11:30:11 crc kubenswrapper[5072]: I1124 11:30:11.939994 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dad518787af8bd939b0b7710d695fa40c4085af4ef46e99d4a322da6674601a0" Nov 24 11:30:11 crc kubenswrapper[5072]: I1124 11:30:11.939776 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn" Nov 24 11:30:12 crc kubenswrapper[5072]: I1124 11:30:12.018220 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h"] Nov 24 11:30:12 crc kubenswrapper[5072]: E1124 11:30:12.018823 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45bca15f-243e-425b-b451-de61c3da8a4d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 11:30:12 crc kubenswrapper[5072]: I1124 11:30:12.018908 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="45bca15f-243e-425b-b451-de61c3da8a4d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 11:30:12 crc kubenswrapper[5072]: E1124 11:30:12.018970 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99ac2b9-7719-430e-b9f0-6263982af569" containerName="collect-profiles" Nov 24 11:30:12 crc kubenswrapper[5072]: I1124 11:30:12.019023 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99ac2b9-7719-430e-b9f0-6263982af569" containerName="collect-profiles" Nov 24 11:30:12 crc kubenswrapper[5072]: I1124 11:30:12.019231 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="c99ac2b9-7719-430e-b9f0-6263982af569" containerName="collect-profiles" Nov 24 11:30:12 crc kubenswrapper[5072]: I1124 11:30:12.019296 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="45bca15f-243e-425b-b451-de61c3da8a4d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 11:30:12 crc kubenswrapper[5072]: I1124 11:30:12.019924 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h" Nov 24 11:30:12 crc kubenswrapper[5072]: I1124 11:30:12.022809 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:30:12 crc kubenswrapper[5072]: I1124 11:30:12.023174 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:30:12 crc kubenswrapper[5072]: I1124 11:30:12.023498 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:30:12 crc kubenswrapper[5072]: I1124 11:30:12.025092 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b6s7d" Nov 24 11:30:12 crc kubenswrapper[5072]: I1124 11:30:12.042009 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h"] Nov 24 11:30:12 crc kubenswrapper[5072]: I1124 11:30:12.082220 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55d5c4ad-dbbc-4728-bac4-f12adda414f1-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h\" (UID: \"55d5c4ad-dbbc-4728-bac4-f12adda414f1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h" Nov 24 11:30:12 crc kubenswrapper[5072]: I1124 11:30:12.082547 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvbmj\" (UniqueName: \"kubernetes.io/projected/55d5c4ad-dbbc-4728-bac4-f12adda414f1-kube-api-access-cvbmj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h\" (UID: \"55d5c4ad-dbbc-4728-bac4-f12adda414f1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h" Nov 24 11:30:12 crc kubenswrapper[5072]: I1124 11:30:12.082657 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d5c4ad-dbbc-4728-bac4-f12adda414f1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h\" (UID: \"55d5c4ad-dbbc-4728-bac4-f12adda414f1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h" Nov 24 11:30:12 crc kubenswrapper[5072]: I1124 11:30:12.082903 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55d5c4ad-dbbc-4728-bac4-f12adda414f1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h\" (UID: \"55d5c4ad-dbbc-4728-bac4-f12adda414f1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h" Nov 24 11:30:12 crc kubenswrapper[5072]: I1124 11:30:12.184578 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55d5c4ad-dbbc-4728-bac4-f12adda414f1-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h\" (UID: \"55d5c4ad-dbbc-4728-bac4-f12adda414f1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h" Nov 24 11:30:12 crc kubenswrapper[5072]: I1124 11:30:12.184915 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvbmj\" (UniqueName: \"kubernetes.io/projected/55d5c4ad-dbbc-4728-bac4-f12adda414f1-kube-api-access-cvbmj\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h\" (UID: \"55d5c4ad-dbbc-4728-bac4-f12adda414f1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h" Nov 24 11:30:12 crc kubenswrapper[5072]: I1124 11:30:12.185044 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d5c4ad-dbbc-4728-bac4-f12adda414f1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h\" (UID: \"55d5c4ad-dbbc-4728-bac4-f12adda414f1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h" Nov 24 11:30:12 crc kubenswrapper[5072]: I1124 11:30:12.185273 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55d5c4ad-dbbc-4728-bac4-f12adda414f1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h\" (UID: \"55d5c4ad-dbbc-4728-bac4-f12adda414f1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h" Nov 24 11:30:12 crc kubenswrapper[5072]: I1124 11:30:12.188911 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55d5c4ad-dbbc-4728-bac4-f12adda414f1-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h\" (UID: \"55d5c4ad-dbbc-4728-bac4-f12adda414f1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h" Nov 24 11:30:12 crc kubenswrapper[5072]: I1124 11:30:12.189053 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55d5c4ad-dbbc-4728-bac4-f12adda414f1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h\" (UID: \"55d5c4ad-dbbc-4728-bac4-f12adda414f1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h" Nov 24 11:30:12 crc kubenswrapper[5072]: I1124 11:30:12.189845 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d5c4ad-dbbc-4728-bac4-f12adda414f1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h\" (UID: \"55d5c4ad-dbbc-4728-bac4-f12adda414f1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h" Nov 24 11:30:12 crc kubenswrapper[5072]: I1124 11:30:12.207773 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvbmj\" (UniqueName: \"kubernetes.io/projected/55d5c4ad-dbbc-4728-bac4-f12adda414f1-kube-api-access-cvbmj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h\" (UID: \"55d5c4ad-dbbc-4728-bac4-f12adda414f1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h" Nov 24 11:30:12 crc kubenswrapper[5072]: I1124 11:30:12.351635 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h" Nov 24 11:30:12 crc kubenswrapper[5072]: I1124 11:30:12.863554 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h"] Nov 24 11:30:12 crc kubenswrapper[5072]: W1124 11:30:12.868608 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55d5c4ad_dbbc_4728_bac4_f12adda414f1.slice/crio-b3cb47af55c002d275c5f638396c3afb9a24edd68755f434f433b4d78baf1bab WatchSource:0}: Error finding container b3cb47af55c002d275c5f638396c3afb9a24edd68755f434f433b4d78baf1bab: Status 404 returned error can't find the container with id b3cb47af55c002d275c5f638396c3afb9a24edd68755f434f433b4d78baf1bab Nov 24 11:30:12 crc kubenswrapper[5072]: I1124 11:30:12.954150 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h" event={"ID":"55d5c4ad-dbbc-4728-bac4-f12adda414f1","Type":"ContainerStarted","Data":"b3cb47af55c002d275c5f638396c3afb9a24edd68755f434f433b4d78baf1bab"} Nov 24 11:30:13 crc kubenswrapper[5072]: I1124 11:30:13.964322 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h" event={"ID":"55d5c4ad-dbbc-4728-bac4-f12adda414f1","Type":"ContainerStarted","Data":"bacde65c0bf7088a571c6dd75c114ac6fdad7e96b5f661ba9978746b8f8f018e"} Nov 24 11:30:13 crc kubenswrapper[5072]: I1124 11:30:13.983344 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h" podStartSLOduration=2.551004259 podStartE2EDuration="2.983324259s" podCreationTimestamp="2025-11-24 11:30:11 +0000 UTC" firstStartedPulling="2025-11-24 11:30:12.871530787 +0000 UTC m=+1264.583055263" lastFinishedPulling="2025-11-24 11:30:13.303850787 +0000 UTC m=+1265.015375263" observedRunningTime="2025-11-24 11:30:13.982878639 +0000 UTC m=+1265.694403115" watchObservedRunningTime="2025-11-24 11:30:13.983324259 +0000 UTC m=+1265.694848735" Nov 24 11:30:15 crc kubenswrapper[5072]: I1124 11:30:15.879560 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:30:43 crc kubenswrapper[5072]: I1124 11:30:43.645032 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:30:43 crc kubenswrapper[5072]: I1124 11:30:43.645695 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:31:13 crc kubenswrapper[5072]: I1124 11:31:13.645159 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:31:13 crc kubenswrapper[5072]: I1124 11:31:13.645823 5072 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:31:14 crc kubenswrapper[5072]: I1124 11:31:14.183142 5072 scope.go:117] "RemoveContainer" containerID="2bcce05c4b56d34202a761419d3cefa1ec23b24d985c80289439bbbeb44bab15" Nov 24 11:31:43 crc kubenswrapper[5072]: I1124 11:31:43.645456 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:31:43 crc kubenswrapper[5072]: I1124 11:31:43.646053 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:31:43 crc kubenswrapper[5072]: I1124 11:31:43.646110 5072 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 11:31:43 crc kubenswrapper[5072]: I1124 11:31:43.646825 5072 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f55c06922e799a9c07f40b576b3a8c5fadc1f87864557b3d2231c8cbac92093"} pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 11:31:43 crc kubenswrapper[5072]: I1124 11:31:43.647473 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" containerID="cri-o://6f55c06922e799a9c07f40b576b3a8c5fadc1f87864557b3d2231c8cbac92093" gracePeriod=600 Nov 24 11:31:45 crc kubenswrapper[5072]: I1124 11:31:45.031834 5072 generic.go:334] "Generic (PLEG): container finished" podID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerID="6f55c06922e799a9c07f40b576b3a8c5fadc1f87864557b3d2231c8cbac92093" exitCode=0 Nov 24 11:31:45 crc kubenswrapper[5072]: I1124 11:31:45.032671 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerDied","Data":"6f55c06922e799a9c07f40b576b3a8c5fadc1f87864557b3d2231c8cbac92093"} Nov 24 11:31:45 crc kubenswrapper[5072]: I1124 11:31:45.033079 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerStarted","Data":"f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec"} Nov 24 11:31:45 crc kubenswrapper[5072]: I1124 11:31:45.033111 5072 scope.go:117] "RemoveContainer" containerID="b030b14c475fa1e60935020fac8bbc582c34d80ebfa6d2f82381ce67034a5e50" Nov 24 11:31:59 crc kubenswrapper[5072]: I1124 11:31:59.513735 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qtbbs"] Nov 24 11:31:59 crc kubenswrapper[5072]: 
I1124 11:31:59.518166 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qtbbs" Nov 24 11:31:59 crc kubenswrapper[5072]: I1124 11:31:59.527779 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qtbbs"] Nov 24 11:31:59 crc kubenswrapper[5072]: I1124 11:31:59.600638 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc379dc2-9ff5-418d-a9c0-ec7063725208-utilities\") pod \"redhat-operators-qtbbs\" (UID: \"fc379dc2-9ff5-418d-a9c0-ec7063725208\") " pod="openshift-marketplace/redhat-operators-qtbbs" Nov 24 11:31:59 crc kubenswrapper[5072]: I1124 11:31:59.600747 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc379dc2-9ff5-418d-a9c0-ec7063725208-catalog-content\") pod \"redhat-operators-qtbbs\" (UID: \"fc379dc2-9ff5-418d-a9c0-ec7063725208\") " pod="openshift-marketplace/redhat-operators-qtbbs" Nov 24 11:31:59 crc kubenswrapper[5072]: I1124 11:31:59.600813 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shcrt\" (UniqueName: \"kubernetes.io/projected/fc379dc2-9ff5-418d-a9c0-ec7063725208-kube-api-access-shcrt\") pod \"redhat-operators-qtbbs\" (UID: \"fc379dc2-9ff5-418d-a9c0-ec7063725208\") " pod="openshift-marketplace/redhat-operators-qtbbs" Nov 24 11:31:59 crc kubenswrapper[5072]: I1124 11:31:59.702006 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shcrt\" (UniqueName: \"kubernetes.io/projected/fc379dc2-9ff5-418d-a9c0-ec7063725208-kube-api-access-shcrt\") pod \"redhat-operators-qtbbs\" (UID: \"fc379dc2-9ff5-418d-a9c0-ec7063725208\") " pod="openshift-marketplace/redhat-operators-qtbbs" Nov 24 11:31:59 crc kubenswrapper[5072]: I1124 11:31:59.702109 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc379dc2-9ff5-418d-a9c0-ec7063725208-utilities\") pod \"redhat-operators-qtbbs\" (UID: \"fc379dc2-9ff5-418d-a9c0-ec7063725208\") " pod="openshift-marketplace/redhat-operators-qtbbs" Nov 24 11:31:59 crc kubenswrapper[5072]: I1124 11:31:59.702239 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc379dc2-9ff5-418d-a9c0-ec7063725208-catalog-content\") pod \"redhat-operators-qtbbs\" (UID: \"fc379dc2-9ff5-418d-a9c0-ec7063725208\") " pod="openshift-marketplace/redhat-operators-qtbbs" Nov 24 11:31:59 crc kubenswrapper[5072]: I1124 11:31:59.702710 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc379dc2-9ff5-418d-a9c0-ec7063725208-utilities\") pod \"redhat-operators-qtbbs\" (UID: \"fc379dc2-9ff5-418d-a9c0-ec7063725208\") " pod="openshift-marketplace/redhat-operators-qtbbs" Nov 24 11:31:59 crc kubenswrapper[5072]: I1124 11:31:59.702807 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc379dc2-9ff5-418d-a9c0-ec7063725208-catalog-content\") pod \"redhat-operators-qtbbs\" (UID: \"fc379dc2-9ff5-418d-a9c0-ec7063725208\") " pod="openshift-marketplace/redhat-operators-qtbbs" Nov 24 11:31:59 crc kubenswrapper[5072]: I1124 11:31:59.728625 5072 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shcrt\" (UniqueName: \"kubernetes.io/projected/fc379dc2-9ff5-418d-a9c0-ec7063725208-kube-api-access-shcrt\") pod \"redhat-operators-qtbbs\" (UID: \"fc379dc2-9ff5-418d-a9c0-ec7063725208\") " pod="openshift-marketplace/redhat-operators-qtbbs" Nov 24 11:31:59 crc kubenswrapper[5072]: I1124 11:31:59.843838 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qtbbs" Nov 24 11:32:00 crc kubenswrapper[5072]: I1124 11:32:00.286551 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qtbbs"] Nov 24 11:32:01 crc kubenswrapper[5072]: I1124 11:32:01.192395 5072 generic.go:334] "Generic (PLEG): container finished" podID="fc379dc2-9ff5-418d-a9c0-ec7063725208" containerID="f18be9bc46d77bc89dccbe8c700e0240a695b97e8a1a405946e90e19fd5b34d8" exitCode=0 Nov 24 11:32:01 crc kubenswrapper[5072]: I1124 11:32:01.192619 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtbbs" event={"ID":"fc379dc2-9ff5-418d-a9c0-ec7063725208","Type":"ContainerDied","Data":"f18be9bc46d77bc89dccbe8c700e0240a695b97e8a1a405946e90e19fd5b34d8"} Nov 24 11:32:01 crc kubenswrapper[5072]: I1124 11:32:01.194526 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtbbs" event={"ID":"fc379dc2-9ff5-418d-a9c0-ec7063725208","Type":"ContainerStarted","Data":"5a989882e5966e9d038b8521883d4281e73e746d95d6fb8efe896642eb0e04c2"} Nov 24 11:32:04 crc kubenswrapper[5072]: I1124 11:32:04.226173 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtbbs" event={"ID":"fc379dc2-9ff5-418d-a9c0-ec7063725208","Type":"ContainerStarted","Data":"ae95d27311428ace5cc3f101e8877e059cdb0d556ab8a2289cbc3de38bff5614"} Nov 24 11:32:06 crc kubenswrapper[5072]: I1124 11:32:06.249449 5072 generic.go:334] "Generic (PLEG): container finished" podID="fc379dc2-9ff5-418d-a9c0-ec7063725208" containerID="ae95d27311428ace5cc3f101e8877e059cdb0d556ab8a2289cbc3de38bff5614" exitCode=0 Nov 24 11:32:06 crc kubenswrapper[5072]: I1124 11:32:06.249534 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtbbs" event={"ID":"fc379dc2-9ff5-418d-a9c0-ec7063725208","Type":"ContainerDied","Data":"ae95d27311428ace5cc3f101e8877e059cdb0d556ab8a2289cbc3de38bff5614"} Nov 24 11:32:07 crc kubenswrapper[5072]: I1124 11:32:07.269398 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtbbs" event={"ID":"fc379dc2-9ff5-418d-a9c0-ec7063725208","Type":"ContainerStarted","Data":"32c9d2fd71c4499d53c814bc8af5f474381887b6624365e2022e26af028637ba"} Nov 24 11:32:09 crc kubenswrapper[5072]: I1124 11:32:09.844083 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qtbbs" Nov 24 11:32:09 crc kubenswrapper[5072]: I1124 11:32:09.844414 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qtbbs" Nov 24 11:32:10 crc kubenswrapper[5072]: I1124 11:32:10.886299 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qtbbs" podUID="fc379dc2-9ff5-418d-a9c0-ec7063725208" containerName="registry-server" probeResult="failure" output=< Nov 24 11:32:10 crc kubenswrapper[5072]: timeout: failed to connect service ":50051" within 1s Nov 24 
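Annotation: marketplace registry-server containers gate startup on their gRPC port; the failure output just below shows the check could not reach :50051 within 1 s while the catalog was still loading. A bare-bones Go sketch of the connect-with-deadline step (a real probe such as grpc_health_probe also performs a Health/Check RPC on top of the connection; the exact probe command used by this pod is an assumption):

```go
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// Dial the registry-server's gRPC port with a 1 s deadline; a refused or
// timed-out connection is reported the way the probe output below does.
func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:50051", time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "timeout: failed to connect service %q within 1s: %v\n", ":50051", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("connected")
}
```

Nov 24 11:32:10 crc kubenswrapper[5072]: I1124 11:32:10.886299 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qtbbs" podUID="fc379dc2-9ff5-418d-a9c0-ec7063725208" containerName="registry-server" probeResult="failure" output=<
Nov 24 11:32:10 crc kubenswrapper[5072]: timeout: failed to connect service ":50051" within 1s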
Nov 24 11:32:10 crc kubenswrapper[5072]: >
Nov 24 11:32:14 crc kubenswrapper[5072]: I1124 11:32:14.293558 5072 scope.go:117] "RemoveContainer" containerID="d47123d9a768cc80969cf1ab5eeb3b37a3f4ba43a727da9cffb6be1900702a41"
Nov 24 11:32:19 crc kubenswrapper[5072]: I1124 11:32:19.913850 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qtbbs"
Nov 24 11:32:19 crc kubenswrapper[5072]: I1124 11:32:19.947584 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qtbbs" podStartSLOduration=15.470508126 podStartE2EDuration="20.947564522s" podCreationTimestamp="2025-11-24 11:31:59 +0000 UTC" firstStartedPulling="2025-11-24 11:32:01.196008096 +0000 UTC m=+1372.907532592" lastFinishedPulling="2025-11-24 11:32:06.673064472 +0000 UTC m=+1378.384588988" observedRunningTime="2025-11-24 11:32:07.30045081 +0000 UTC m=+1379.011975286" watchObservedRunningTime="2025-11-24 11:32:19.947564522 +0000 UTC m=+1391.659089008"
Nov 24 11:32:19 crc kubenswrapper[5072]: I1124 11:32:19.989765 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qtbbs"
Nov 24 11:32:20 crc kubenswrapper[5072]: I1124 11:32:20.196774 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qtbbs"]
Nov 24 11:32:21 crc kubenswrapper[5072]: I1124 11:32:21.450832 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qtbbs" podUID="fc379dc2-9ff5-418d-a9c0-ec7063725208" containerName="registry-server" containerID="cri-o://32c9d2fd71c4499d53c814bc8af5f474381887b6624365e2022e26af028637ba" gracePeriod=2
Nov 24 11:32:21 crc kubenswrapper[5072]: I1124 11:32:21.945129 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qtbbs"
Nov 24 11:32:22 crc kubenswrapper[5072]: I1124 11:32:22.076577 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shcrt\" (UniqueName: \"kubernetes.io/projected/fc379dc2-9ff5-418d-a9c0-ec7063725208-kube-api-access-shcrt\") pod \"fc379dc2-9ff5-418d-a9c0-ec7063725208\" (UID: \"fc379dc2-9ff5-418d-a9c0-ec7063725208\") "
Nov 24 11:32:22 crc kubenswrapper[5072]: I1124 11:32:22.076632 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc379dc2-9ff5-418d-a9c0-ec7063725208-catalog-content\") pod \"fc379dc2-9ff5-418d-a9c0-ec7063725208\" (UID: \"fc379dc2-9ff5-418d-a9c0-ec7063725208\") "
Nov 24 11:32:22 crc kubenswrapper[5072]: I1124 11:32:22.076853 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc379dc2-9ff5-418d-a9c0-ec7063725208-utilities\") pod \"fc379dc2-9ff5-418d-a9c0-ec7063725208\" (UID: \"fc379dc2-9ff5-418d-a9c0-ec7063725208\") "
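Annotation: in the tracker record above, podStartE2EDuration (20.947564522s) equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (15.470508126) is that same span minus the image-pull window (lastFinishedPulling − firstStartedPulling ≈ 5.477 s). The subtraction rule is inferred from these fields; a quick check with the quoted timestamps (the tracker itself uses the monotonic m=+ offsets, so the last digits differ slightly):

```go
package main

import (
	"fmt"
	"time"
)

// Timestamps copied from the "Observed pod startup duration" record above.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-11-24 11:31:59 +0000 UTC")
	firstPull := mustParse("2025-11-24 11:32:01.196008096 +0000 UTC")
	lastPull := mustParse("2025-11-24 11:32:06.673064472 +0000 UTC")
	running := mustParse("2025-11-24 11:32:19.947564522 +0000 UTC") // watchObservedRunningTime

	e2e := running.Sub(created)          // 20.947564522s, matches podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // ~15.4705s; log shows 15.470508126 via m=+ offsets
	fmt.Println(e2e, slo)
}
```

Nov 24 11:32:22 crc kubenswrapper[5072]: I1124 11:32:22.078018 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc379dc2-9ff5-418d-a9c0-ec7063725208-utilities" (OuterVolumeSpecName: "utilities") pod "fc379dc2-9ff5-418d-a9c0-ec7063725208" (UID: "fc379dc2-9ff5-418d-a9c0-ec7063725208"). InnerVolumeSpecName "utilities".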
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:32:22 crc kubenswrapper[5072]: I1124 11:32:22.086691 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc379dc2-9ff5-418d-a9c0-ec7063725208-kube-api-access-shcrt" (OuterVolumeSpecName: "kube-api-access-shcrt") pod "fc379dc2-9ff5-418d-a9c0-ec7063725208" (UID: "fc379dc2-9ff5-418d-a9c0-ec7063725208"). InnerVolumeSpecName "kube-api-access-shcrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:32:22 crc kubenswrapper[5072]: I1124 11:32:22.166408 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc379dc2-9ff5-418d-a9c0-ec7063725208-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc379dc2-9ff5-418d-a9c0-ec7063725208" (UID: "fc379dc2-9ff5-418d-a9c0-ec7063725208"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:32:22 crc kubenswrapper[5072]: I1124 11:32:22.178971 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc379dc2-9ff5-418d-a9c0-ec7063725208-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:32:22 crc kubenswrapper[5072]: I1124 11:32:22.179001 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shcrt\" (UniqueName: \"kubernetes.io/projected/fc379dc2-9ff5-418d-a9c0-ec7063725208-kube-api-access-shcrt\") on node \"crc\" DevicePath \"\"" Nov 24 11:32:22 crc kubenswrapper[5072]: I1124 11:32:22.179015 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc379dc2-9ff5-418d-a9c0-ec7063725208-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:32:22 crc kubenswrapper[5072]: I1124 11:32:22.471326 5072 generic.go:334] "Generic (PLEG): container finished" podID="fc379dc2-9ff5-418d-a9c0-ec7063725208" containerID="32c9d2fd71c4499d53c814bc8af5f474381887b6624365e2022e26af028637ba" exitCode=0 Nov 24 11:32:22 crc kubenswrapper[5072]: I1124 11:32:22.471437 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qtbbs" Nov 24 11:32:22 crc kubenswrapper[5072]: I1124 11:32:22.471447 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtbbs" event={"ID":"fc379dc2-9ff5-418d-a9c0-ec7063725208","Type":"ContainerDied","Data":"32c9d2fd71c4499d53c814bc8af5f474381887b6624365e2022e26af028637ba"} Nov 24 11:32:22 crc kubenswrapper[5072]: I1124 11:32:22.471541 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtbbs" event={"ID":"fc379dc2-9ff5-418d-a9c0-ec7063725208","Type":"ContainerDied","Data":"5a989882e5966e9d038b8521883d4281e73e746d95d6fb8efe896642eb0e04c2"} Nov 24 11:32:22 crc kubenswrapper[5072]: I1124 11:32:22.471592 5072 scope.go:117] "RemoveContainer" containerID="32c9d2fd71c4499d53c814bc8af5f474381887b6624365e2022e26af028637ba" Nov 24 11:32:22 crc kubenswrapper[5072]: I1124 11:32:22.507686 5072 scope.go:117] "RemoveContainer" containerID="ae95d27311428ace5cc3f101e8877e059cdb0d556ab8a2289cbc3de38bff5614" Nov 24 11:32:22 crc kubenswrapper[5072]: I1124 11:32:22.537013 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qtbbs"] Nov 24 11:32:22 crc kubenswrapper[5072]: I1124 11:32:22.548743 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qtbbs"] Nov 24 11:32:22 crc kubenswrapper[5072]: I1124 11:32:22.556574 5072 scope.go:117] "RemoveContainer" containerID="f18be9bc46d77bc89dccbe8c700e0240a695b97e8a1a405946e90e19fd5b34d8" Nov 24 11:32:22 crc kubenswrapper[5072]: I1124 11:32:22.603537 5072 scope.go:117] "RemoveContainer" containerID="32c9d2fd71c4499d53c814bc8af5f474381887b6624365e2022e26af028637ba" Nov 24 11:32:22 crc kubenswrapper[5072]: E1124 11:32:22.603990 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32c9d2fd71c4499d53c814bc8af5f474381887b6624365e2022e26af028637ba\": container with ID starting with 32c9d2fd71c4499d53c814bc8af5f474381887b6624365e2022e26af028637ba not found: ID does not exist" containerID="32c9d2fd71c4499d53c814bc8af5f474381887b6624365e2022e26af028637ba" Nov 24 11:32:22 crc kubenswrapper[5072]: I1124 11:32:22.604023 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c9d2fd71c4499d53c814bc8af5f474381887b6624365e2022e26af028637ba"} err="failed to get container status \"32c9d2fd71c4499d53c814bc8af5f474381887b6624365e2022e26af028637ba\": rpc error: code = NotFound desc = could not find container \"32c9d2fd71c4499d53c814bc8af5f474381887b6624365e2022e26af028637ba\": container with ID starting with 32c9d2fd71c4499d53c814bc8af5f474381887b6624365e2022e26af028637ba not found: ID does not exist" Nov 24 11:32:22 crc kubenswrapper[5072]: I1124 11:32:22.604047 5072 scope.go:117] "RemoveContainer" containerID="ae95d27311428ace5cc3f101e8877e059cdb0d556ab8a2289cbc3de38bff5614" Nov 24 11:32:22 crc kubenswrapper[5072]: E1124 11:32:22.604412 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae95d27311428ace5cc3f101e8877e059cdb0d556ab8a2289cbc3de38bff5614\": container with ID starting with ae95d27311428ace5cc3f101e8877e059cdb0d556ab8a2289cbc3de38bff5614 not found: ID does not exist" containerID="ae95d27311428ace5cc3f101e8877e059cdb0d556ab8a2289cbc3de38bff5614" Nov 24 11:32:22 crc kubenswrapper[5072]: I1124 11:32:22.604436 5072 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae95d27311428ace5cc3f101e8877e059cdb0d556ab8a2289cbc3de38bff5614"} err="failed to get container status \"ae95d27311428ace5cc3f101e8877e059cdb0d556ab8a2289cbc3de38bff5614\": rpc error: code = NotFound desc = could not find container \"ae95d27311428ace5cc3f101e8877e059cdb0d556ab8a2289cbc3de38bff5614\": container with ID starting with ae95d27311428ace5cc3f101e8877e059cdb0d556ab8a2289cbc3de38bff5614 not found: ID does not exist"
Nov 24 11:32:22 crc kubenswrapper[5072]: I1124 11:32:22.604452 5072 scope.go:117] "RemoveContainer" containerID="f18be9bc46d77bc89dccbe8c700e0240a695b97e8a1a405946e90e19fd5b34d8"
Nov 24 11:32:22 crc kubenswrapper[5072]: E1124 11:32:22.604816 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f18be9bc46d77bc89dccbe8c700e0240a695b97e8a1a405946e90e19fd5b34d8\": container with ID starting with f18be9bc46d77bc89dccbe8c700e0240a695b97e8a1a405946e90e19fd5b34d8 not found: ID does not exist" containerID="f18be9bc46d77bc89dccbe8c700e0240a695b97e8a1a405946e90e19fd5b34d8"
Nov 24 11:32:22 crc kubenswrapper[5072]: I1124 11:32:22.604842 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f18be9bc46d77bc89dccbe8c700e0240a695b97e8a1a405946e90e19fd5b34d8"} err="failed to get container status \"f18be9bc46d77bc89dccbe8c700e0240a695b97e8a1a405946e90e19fd5b34d8\": rpc error: code = NotFound desc = could not find container \"f18be9bc46d77bc89dccbe8c700e0240a695b97e8a1a405946e90e19fd5b34d8\": container with ID starting with f18be9bc46d77bc89dccbe8c700e0240a695b97e8a1a405946e90e19fd5b34d8 not found: ID does not exist"
Nov 24 11:32:23 crc kubenswrapper[5072]: I1124 11:32:23.036301 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc379dc2-9ff5-418d-a9c0-ec7063725208" path="/var/lib/kubelet/pods/fc379dc2-9ff5-418d-a9c0-ec7063725208/volumes"
Nov 24 11:32:34 crc kubenswrapper[5072]: I1124 11:32:34.034125 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ksbqn"]
Nov 24 11:32:34 crc kubenswrapper[5072]: E1124 11:32:34.043509 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc379dc2-9ff5-418d-a9c0-ec7063725208" containerName="extract-content"
Nov 24 11:32:34 crc kubenswrapper[5072]: I1124 11:32:34.043869 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc379dc2-9ff5-418d-a9c0-ec7063725208" containerName="extract-content"
Nov 24 11:32:34 crc kubenswrapper[5072]: E1124 11:32:34.043968 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc379dc2-9ff5-418d-a9c0-ec7063725208" containerName="extract-utilities"
Nov 24 11:32:34 crc kubenswrapper[5072]: I1124 11:32:34.044039 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc379dc2-9ff5-418d-a9c0-ec7063725208" containerName="extract-utilities"
Nov 24 11:32:34 crc kubenswrapper[5072]: E1124 11:32:34.044142 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc379dc2-9ff5-418d-a9c0-ec7063725208" containerName="registry-server"
Nov 24 11:32:34 crc kubenswrapper[5072]: I1124 11:32:34.044219 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc379dc2-9ff5-418d-a9c0-ec7063725208" containerName="registry-server"
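Annotation: the E-level `ContainerStatus from runtime service failed` / `DeleteContainer returned error` pairs above are a benign race: the kubelet re-issues RemoveContainer over CRI for IDs that CRI-O has already garbage-collected, and the runtime answers gRPC NotFound. The usual client-side pattern is to treat NotFound as success so deletion stays idempotent (illustrative pattern, not kubelet source; `remove` is a hypothetical stand-in for the CRI call):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// deleteContainer retries-safely removes a container: a NotFound reply
// means someone else already deleted it, which is the desired end state.
func deleteContainer(id string, remove func(string) error) error {
	if err := remove(id); err != nil {
		if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
			return nil // already gone: deletion is idempotent
		}
		return fmt.Errorf("delete %s: %w", id, err)
	}
	return nil
}

func main() {
	// Simulate a runtime that has already garbage-collected the container.
	gone := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	fmt.Println(deleteContainer("32c9d2fd71c4", gone)) // <nil>
}
```

Nov 24 11:32:34 crc kubenswrapper[5072]: I1124 11:32:34.044522 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc379dc2-9ff5-418d-a9c0-ec7063725208"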
containerName="registry-server" Nov 24 11:32:34 crc kubenswrapper[5072]: I1124 11:32:34.046422 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ksbqn" Nov 24 11:32:34 crc kubenswrapper[5072]: I1124 11:32:34.049346 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ksbqn"] Nov 24 11:32:34 crc kubenswrapper[5072]: I1124 11:32:34.180683 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd7e9471-bcce-4640-88ea-96274af64768-catalog-content\") pod \"redhat-marketplace-ksbqn\" (UID: \"dd7e9471-bcce-4640-88ea-96274af64768\") " pod="openshift-marketplace/redhat-marketplace-ksbqn" Nov 24 11:32:34 crc kubenswrapper[5072]: I1124 11:32:34.180754 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd7e9471-bcce-4640-88ea-96274af64768-utilities\") pod \"redhat-marketplace-ksbqn\" (UID: \"dd7e9471-bcce-4640-88ea-96274af64768\") " pod="openshift-marketplace/redhat-marketplace-ksbqn" Nov 24 11:32:34 crc kubenswrapper[5072]: I1124 11:32:34.180862 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88g9z\" (UniqueName: \"kubernetes.io/projected/dd7e9471-bcce-4640-88ea-96274af64768-kube-api-access-88g9z\") pod \"redhat-marketplace-ksbqn\" (UID: \"dd7e9471-bcce-4640-88ea-96274af64768\") " pod="openshift-marketplace/redhat-marketplace-ksbqn" Nov 24 11:32:34 crc kubenswrapper[5072]: I1124 11:32:34.282791 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd7e9471-bcce-4640-88ea-96274af64768-catalog-content\") pod \"redhat-marketplace-ksbqn\" (UID: \"dd7e9471-bcce-4640-88ea-96274af64768\") " pod="openshift-marketplace/redhat-marketplace-ksbqn" Nov 24 11:32:34 crc kubenswrapper[5072]: I1124 11:32:34.282846 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd7e9471-bcce-4640-88ea-96274af64768-utilities\") pod \"redhat-marketplace-ksbqn\" (UID: \"dd7e9471-bcce-4640-88ea-96274af64768\") " pod="openshift-marketplace/redhat-marketplace-ksbqn" Nov 24 11:32:34 crc kubenswrapper[5072]: I1124 11:32:34.282919 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88g9z\" (UniqueName: \"kubernetes.io/projected/dd7e9471-bcce-4640-88ea-96274af64768-kube-api-access-88g9z\") pod \"redhat-marketplace-ksbqn\" (UID: \"dd7e9471-bcce-4640-88ea-96274af64768\") " pod="openshift-marketplace/redhat-marketplace-ksbqn" Nov 24 11:32:34 crc kubenswrapper[5072]: I1124 11:32:34.283417 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd7e9471-bcce-4640-88ea-96274af64768-catalog-content\") pod \"redhat-marketplace-ksbqn\" (UID: \"dd7e9471-bcce-4640-88ea-96274af64768\") " pod="openshift-marketplace/redhat-marketplace-ksbqn" Nov 24 11:32:34 crc kubenswrapper[5072]: I1124 11:32:34.283445 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd7e9471-bcce-4640-88ea-96274af64768-utilities\") pod \"redhat-marketplace-ksbqn\" (UID: \"dd7e9471-bcce-4640-88ea-96274af64768\") " 
pod="openshift-marketplace/redhat-marketplace-ksbqn" Nov 24 11:32:34 crc kubenswrapper[5072]: I1124 11:32:34.304138 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88g9z\" (UniqueName: \"kubernetes.io/projected/dd7e9471-bcce-4640-88ea-96274af64768-kube-api-access-88g9z\") pod \"redhat-marketplace-ksbqn\" (UID: \"dd7e9471-bcce-4640-88ea-96274af64768\") " pod="openshift-marketplace/redhat-marketplace-ksbqn" Nov 24 11:32:34 crc kubenswrapper[5072]: I1124 11:32:34.389265 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ksbqn" Nov 24 11:32:34 crc kubenswrapper[5072]: I1124 11:32:34.826499 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ksbqn"] Nov 24 11:32:35 crc kubenswrapper[5072]: I1124 11:32:35.623332 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksbqn" event={"ID":"dd7e9471-bcce-4640-88ea-96274af64768","Type":"ContainerDied","Data":"c06170c8bac23077e8cff265ebeb5401c31841be234ad9ce84f259922c41a282"} Nov 24 11:32:35 crc kubenswrapper[5072]: I1124 11:32:35.624311 5072 generic.go:334] "Generic (PLEG): container finished" podID="dd7e9471-bcce-4640-88ea-96274af64768" containerID="c06170c8bac23077e8cff265ebeb5401c31841be234ad9ce84f259922c41a282" exitCode=0 Nov 24 11:32:35 crc kubenswrapper[5072]: I1124 11:32:35.624424 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksbqn" event={"ID":"dd7e9471-bcce-4640-88ea-96274af64768","Type":"ContainerStarted","Data":"e7e18586d5a42ca49eabcb9a3de63ad1c4eb5a3bb72c267438e1c0b0d7564219"} Nov 24 11:32:36 crc kubenswrapper[5072]: I1124 11:32:36.636676 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksbqn" event={"ID":"dd7e9471-bcce-4640-88ea-96274af64768","Type":"ContainerStarted","Data":"8416ae11a717c52e36b2d36bf0a84327955131f2a1507da0a636e37c430465c2"} Nov 24 11:32:37 crc kubenswrapper[5072]: I1124 11:32:37.653782 5072 generic.go:334] "Generic (PLEG): container finished" podID="dd7e9471-bcce-4640-88ea-96274af64768" containerID="8416ae11a717c52e36b2d36bf0a84327955131f2a1507da0a636e37c430465c2" exitCode=0 Nov 24 11:32:37 crc kubenswrapper[5072]: I1124 11:32:37.653870 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksbqn" event={"ID":"dd7e9471-bcce-4640-88ea-96274af64768","Type":"ContainerDied","Data":"8416ae11a717c52e36b2d36bf0a84327955131f2a1507da0a636e37c430465c2"} Nov 24 11:32:38 crc kubenswrapper[5072]: I1124 11:32:38.683281 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksbqn" event={"ID":"dd7e9471-bcce-4640-88ea-96274af64768","Type":"ContainerStarted","Data":"468021153e1ae51a57d4cadd9afbdc9dfdbbcb79927a4d4f70ddc2d153df551f"} Nov 24 11:32:38 crc kubenswrapper[5072]: I1124 11:32:38.710673 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ksbqn" podStartSLOduration=2.2439915 podStartE2EDuration="4.710653229s" podCreationTimestamp="2025-11-24 11:32:34 +0000 UTC" firstStartedPulling="2025-11-24 11:32:35.626140507 +0000 UTC m=+1407.337665023" lastFinishedPulling="2025-11-24 11:32:38.092802246 +0000 UTC m=+1409.804326752" observedRunningTime="2025-11-24 11:32:38.707078284 +0000 UTC m=+1410.418602810" watchObservedRunningTime="2025-11-24 11:32:38.710653229 +0000 UTC 
m=+1410.422177715" Nov 24 11:32:44 crc kubenswrapper[5072]: I1124 11:32:44.390235 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ksbqn" Nov 24 11:32:44 crc kubenswrapper[5072]: I1124 11:32:44.469752 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ksbqn" Nov 24 11:32:45 crc kubenswrapper[5072]: I1124 11:32:45.004405 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ksbqn" Nov 24 11:32:45 crc kubenswrapper[5072]: I1124 11:32:45.073311 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ksbqn" Nov 24 11:32:45 crc kubenswrapper[5072]: I1124 11:32:45.149447 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ksbqn"] Nov 24 11:32:47 crc kubenswrapper[5072]: I1124 11:32:47.044967 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ksbqn" podUID="dd7e9471-bcce-4640-88ea-96274af64768" containerName="registry-server" containerID="cri-o://468021153e1ae51a57d4cadd9afbdc9dfdbbcb79927a4d4f70ddc2d153df551f" gracePeriod=2 Nov 24 11:32:47 crc kubenswrapper[5072]: I1124 11:32:47.502403 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ksbqn" Nov 24 11:32:47 crc kubenswrapper[5072]: I1124 11:32:47.674564 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd7e9471-bcce-4640-88ea-96274af64768-catalog-content\") pod \"dd7e9471-bcce-4640-88ea-96274af64768\" (UID: \"dd7e9471-bcce-4640-88ea-96274af64768\") " Nov 24 11:32:47 crc kubenswrapper[5072]: I1124 11:32:47.674863 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88g9z\" (UniqueName: \"kubernetes.io/projected/dd7e9471-bcce-4640-88ea-96274af64768-kube-api-access-88g9z\") pod \"dd7e9471-bcce-4640-88ea-96274af64768\" (UID: \"dd7e9471-bcce-4640-88ea-96274af64768\") " Nov 24 11:32:47 crc kubenswrapper[5072]: I1124 11:32:47.674892 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd7e9471-bcce-4640-88ea-96274af64768-utilities\") pod \"dd7e9471-bcce-4640-88ea-96274af64768\" (UID: \"dd7e9471-bcce-4640-88ea-96274af64768\") " Nov 24 11:32:47 crc kubenswrapper[5072]: I1124 11:32:47.676038 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd7e9471-bcce-4640-88ea-96274af64768-utilities" (OuterVolumeSpecName: "utilities") pod "dd7e9471-bcce-4640-88ea-96274af64768" (UID: "dd7e9471-bcce-4640-88ea-96274af64768"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:32:47 crc kubenswrapper[5072]: I1124 11:32:47.682614 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd7e9471-bcce-4640-88ea-96274af64768-kube-api-access-88g9z" (OuterVolumeSpecName: "kube-api-access-88g9z") pod "dd7e9471-bcce-4640-88ea-96274af64768" (UID: "dd7e9471-bcce-4640-88ea-96274af64768"). InnerVolumeSpecName "kube-api-access-88g9z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:32:47 crc kubenswrapper[5072]: I1124 11:32:47.699426 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd7e9471-bcce-4640-88ea-96274af64768-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd7e9471-bcce-4640-88ea-96274af64768" (UID: "dd7e9471-bcce-4640-88ea-96274af64768"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:32:47 crc kubenswrapper[5072]: I1124 11:32:47.777357 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd7e9471-bcce-4640-88ea-96274af64768-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:32:47 crc kubenswrapper[5072]: I1124 11:32:47.777431 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88g9z\" (UniqueName: \"kubernetes.io/projected/dd7e9471-bcce-4640-88ea-96274af64768-kube-api-access-88g9z\") on node \"crc\" DevicePath \"\"" Nov 24 11:32:47 crc kubenswrapper[5072]: I1124 11:32:47.777452 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd7e9471-bcce-4640-88ea-96274af64768-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:32:48 crc kubenswrapper[5072]: I1124 11:32:48.072206 5072 generic.go:334] "Generic (PLEG): container finished" podID="dd7e9471-bcce-4640-88ea-96274af64768" containerID="468021153e1ae51a57d4cadd9afbdc9dfdbbcb79927a4d4f70ddc2d153df551f" exitCode=0 Nov 24 11:32:48 crc kubenswrapper[5072]: I1124 11:32:48.072421 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ksbqn" Nov 24 11:32:48 crc kubenswrapper[5072]: I1124 11:32:48.072456 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksbqn" event={"ID":"dd7e9471-bcce-4640-88ea-96274af64768","Type":"ContainerDied","Data":"468021153e1ae51a57d4cadd9afbdc9dfdbbcb79927a4d4f70ddc2d153df551f"} Nov 24 11:32:48 crc kubenswrapper[5072]: I1124 11:32:48.076521 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksbqn" event={"ID":"dd7e9471-bcce-4640-88ea-96274af64768","Type":"ContainerDied","Data":"e7e18586d5a42ca49eabcb9a3de63ad1c4eb5a3bb72c267438e1c0b0d7564219"} Nov 24 11:32:48 crc kubenswrapper[5072]: I1124 11:32:48.076593 5072 scope.go:117] "RemoveContainer" containerID="468021153e1ae51a57d4cadd9afbdc9dfdbbcb79927a4d4f70ddc2d153df551f" Nov 24 11:32:48 crc kubenswrapper[5072]: I1124 11:32:48.121742 5072 scope.go:117] "RemoveContainer" containerID="8416ae11a717c52e36b2d36bf0a84327955131f2a1507da0a636e37c430465c2" Nov 24 11:32:48 crc kubenswrapper[5072]: I1124 11:32:48.127375 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ksbqn"] Nov 24 11:32:48 crc kubenswrapper[5072]: I1124 11:32:48.138143 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ksbqn"] Nov 24 11:32:48 crc kubenswrapper[5072]: I1124 11:32:48.154816 5072 scope.go:117] "RemoveContainer" containerID="c06170c8bac23077e8cff265ebeb5401c31841be234ad9ce84f259922c41a282" Nov 24 11:32:48 crc kubenswrapper[5072]: I1124 11:32:48.221850 5072 scope.go:117] "RemoveContainer" containerID="468021153e1ae51a57d4cadd9afbdc9dfdbbcb79927a4d4f70ddc2d153df551f" Nov 24 11:32:48 crc kubenswrapper[5072]: E1124 11:32:48.222350 5072 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"468021153e1ae51a57d4cadd9afbdc9dfdbbcb79927a4d4f70ddc2d153df551f\": container with ID starting with 468021153e1ae51a57d4cadd9afbdc9dfdbbcb79927a4d4f70ddc2d153df551f not found: ID does not exist" containerID="468021153e1ae51a57d4cadd9afbdc9dfdbbcb79927a4d4f70ddc2d153df551f" Nov 24 11:32:48 crc kubenswrapper[5072]: I1124 11:32:48.222404 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"468021153e1ae51a57d4cadd9afbdc9dfdbbcb79927a4d4f70ddc2d153df551f"} err="failed to get container status \"468021153e1ae51a57d4cadd9afbdc9dfdbbcb79927a4d4f70ddc2d153df551f\": rpc error: code = NotFound desc = could not find container \"468021153e1ae51a57d4cadd9afbdc9dfdbbcb79927a4d4f70ddc2d153df551f\": container with ID starting with 468021153e1ae51a57d4cadd9afbdc9dfdbbcb79927a4d4f70ddc2d153df551f not found: ID does not exist" Nov 24 11:32:48 crc kubenswrapper[5072]: I1124 11:32:48.222433 5072 scope.go:117] "RemoveContainer" containerID="8416ae11a717c52e36b2d36bf0a84327955131f2a1507da0a636e37c430465c2" Nov 24 11:32:48 crc kubenswrapper[5072]: E1124 11:32:48.222824 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8416ae11a717c52e36b2d36bf0a84327955131f2a1507da0a636e37c430465c2\": container with ID starting with 8416ae11a717c52e36b2d36bf0a84327955131f2a1507da0a636e37c430465c2 not found: ID does not exist" containerID="8416ae11a717c52e36b2d36bf0a84327955131f2a1507da0a636e37c430465c2" Nov 24 11:32:48 crc kubenswrapper[5072]: I1124 11:32:48.222848 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8416ae11a717c52e36b2d36bf0a84327955131f2a1507da0a636e37c430465c2"} err="failed to get container status \"8416ae11a717c52e36b2d36bf0a84327955131f2a1507da0a636e37c430465c2\": rpc error: code = NotFound desc = could not find container \"8416ae11a717c52e36b2d36bf0a84327955131f2a1507da0a636e37c430465c2\": container with ID starting with 8416ae11a717c52e36b2d36bf0a84327955131f2a1507da0a636e37c430465c2 not found: ID does not exist" Nov 24 11:32:48 crc kubenswrapper[5072]: I1124 11:32:48.222863 5072 scope.go:117] "RemoveContainer" containerID="c06170c8bac23077e8cff265ebeb5401c31841be234ad9ce84f259922c41a282" Nov 24 11:32:48 crc kubenswrapper[5072]: E1124 11:32:48.223174 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c06170c8bac23077e8cff265ebeb5401c31841be234ad9ce84f259922c41a282\": container with ID starting with c06170c8bac23077e8cff265ebeb5401c31841be234ad9ce84f259922c41a282 not found: ID does not exist" containerID="c06170c8bac23077e8cff265ebeb5401c31841be234ad9ce84f259922c41a282" Nov 24 11:32:48 crc kubenswrapper[5072]: I1124 11:32:48.223192 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c06170c8bac23077e8cff265ebeb5401c31841be234ad9ce84f259922c41a282"} err="failed to get container status \"c06170c8bac23077e8cff265ebeb5401c31841be234ad9ce84f259922c41a282\": rpc error: code = NotFound desc = could not find container \"c06170c8bac23077e8cff265ebeb5401c31841be234ad9ce84f259922c41a282\": container with ID starting with c06170c8bac23077e8cff265ebeb5401c31841be234ad9ce84f259922c41a282 not found: ID does not exist" Nov 24 11:32:49 crc kubenswrapper[5072]: I1124 11:32:49.035482 5072 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="dd7e9471-bcce-4640-88ea-96274af64768" path="/var/lib/kubelet/pods/dd7e9471-bcce-4640-88ea-96274af64768/volumes" Nov 24 11:33:21 crc kubenswrapper[5072]: I1124 11:33:21.473716 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gfh92"] Nov 24 11:33:21 crc kubenswrapper[5072]: E1124 11:33:21.475165 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd7e9471-bcce-4640-88ea-96274af64768" containerName="extract-utilities" Nov 24 11:33:21 crc kubenswrapper[5072]: I1124 11:33:21.475183 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd7e9471-bcce-4640-88ea-96274af64768" containerName="extract-utilities" Nov 24 11:33:21 crc kubenswrapper[5072]: E1124 11:33:21.475198 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd7e9471-bcce-4640-88ea-96274af64768" containerName="extract-content" Nov 24 11:33:21 crc kubenswrapper[5072]: I1124 11:33:21.475206 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd7e9471-bcce-4640-88ea-96274af64768" containerName="extract-content" Nov 24 11:33:21 crc kubenswrapper[5072]: E1124 11:33:21.475237 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd7e9471-bcce-4640-88ea-96274af64768" containerName="registry-server" Nov 24 11:33:21 crc kubenswrapper[5072]: I1124 11:33:21.475246 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd7e9471-bcce-4640-88ea-96274af64768" containerName="registry-server" Nov 24 11:33:21 crc kubenswrapper[5072]: I1124 11:33:21.475479 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd7e9471-bcce-4640-88ea-96274af64768" containerName="registry-server" Nov 24 11:33:21 crc kubenswrapper[5072]: I1124 11:33:21.476883 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gfh92" Nov 24 11:33:21 crc kubenswrapper[5072]: I1124 11:33:21.481709 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gfh92"] Nov 24 11:33:21 crc kubenswrapper[5072]: I1124 11:33:21.574335 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8h49\" (UniqueName: \"kubernetes.io/projected/6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07-kube-api-access-q8h49\") pod \"community-operators-gfh92\" (UID: \"6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07\") " pod="openshift-marketplace/community-operators-gfh92" Nov 24 11:33:21 crc kubenswrapper[5072]: I1124 11:33:21.574830 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07-utilities\") pod \"community-operators-gfh92\" (UID: \"6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07\") " pod="openshift-marketplace/community-operators-gfh92" Nov 24 11:33:21 crc kubenswrapper[5072]: I1124 11:33:21.574904 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07-catalog-content\") pod \"community-operators-gfh92\" (UID: \"6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07\") " pod="openshift-marketplace/community-operators-gfh92" Nov 24 11:33:21 crc kubenswrapper[5072]: I1124 11:33:21.675927 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07-utilities\") pod \"community-operators-gfh92\" (UID: \"6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07\") " pod="openshift-marketplace/community-operators-gfh92" Nov 24 11:33:21 crc kubenswrapper[5072]: I1124 11:33:21.676012 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07-catalog-content\") pod \"community-operators-gfh92\" (UID: \"6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07\") " pod="openshift-marketplace/community-operators-gfh92" Nov 24 11:33:21 crc kubenswrapper[5072]: I1124 11:33:21.676069 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8h49\" (UniqueName: \"kubernetes.io/projected/6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07-kube-api-access-q8h49\") pod \"community-operators-gfh92\" (UID: \"6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07\") " pod="openshift-marketplace/community-operators-gfh92" Nov 24 11:33:21 crc kubenswrapper[5072]: I1124 11:33:21.676903 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07-utilities\") pod \"community-operators-gfh92\" (UID: \"6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07\") " pod="openshift-marketplace/community-operators-gfh92" Nov 24 11:33:21 crc kubenswrapper[5072]: I1124 11:33:21.677109 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07-catalog-content\") pod \"community-operators-gfh92\" (UID: \"6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07\") " pod="openshift-marketplace/community-operators-gfh92" Nov 24 11:33:21 crc kubenswrapper[5072]: I1124 11:33:21.704661 5072 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-q8h49\" (UniqueName: \"kubernetes.io/projected/6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07-kube-api-access-q8h49\") pod \"community-operators-gfh92\" (UID: \"6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07\") " pod="openshift-marketplace/community-operators-gfh92" Nov 24 11:33:21 crc kubenswrapper[5072]: I1124 11:33:21.805960 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gfh92" Nov 24 11:33:22 crc kubenswrapper[5072]: I1124 11:33:22.359974 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gfh92"] Nov 24 11:33:22 crc kubenswrapper[5072]: I1124 11:33:22.425383 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gfh92" event={"ID":"6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07","Type":"ContainerStarted","Data":"fbac9b9c3a999b7c55302bda6c48e71082af614209b9c6ff2f08adb920f392c8"} Nov 24 11:33:23 crc kubenswrapper[5072]: I1124 11:33:23.435511 5072 generic.go:334] "Generic (PLEG): container finished" podID="6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07" containerID="1ed77df663e498dd5db654bd1d5bde5c613b9ff62f6af0c5269b6f2440bdcc7e" exitCode=0 Nov 24 11:33:23 crc kubenswrapper[5072]: I1124 11:33:23.435574 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gfh92" event={"ID":"6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07","Type":"ContainerDied","Data":"1ed77df663e498dd5db654bd1d5bde5c613b9ff62f6af0c5269b6f2440bdcc7e"} Nov 24 11:33:24 crc kubenswrapper[5072]: I1124 11:33:24.447453 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gfh92" event={"ID":"6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07","Type":"ContainerStarted","Data":"522351488f02d569dcffd13a39d89c1d83ea369f7239a884125af9082ee811fb"} Nov 24 11:33:25 crc kubenswrapper[5072]: I1124 11:33:25.459096 5072 generic.go:334] "Generic (PLEG): container finished" podID="6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07" containerID="522351488f02d569dcffd13a39d89c1d83ea369f7239a884125af9082ee811fb" exitCode=0 Nov 24 11:33:25 crc kubenswrapper[5072]: I1124 11:33:25.459230 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gfh92" event={"ID":"6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07","Type":"ContainerDied","Data":"522351488f02d569dcffd13a39d89c1d83ea369f7239a884125af9082ee811fb"} Nov 24 11:33:27 crc kubenswrapper[5072]: I1124 11:33:27.477719 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gfh92" event={"ID":"6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07","Type":"ContainerStarted","Data":"03a1468dd9818807db8edafb4fe0863ebc4b79b9449c7e22d63704b93792341d"} Nov 24 11:33:27 crc kubenswrapper[5072]: I1124 11:33:27.504147 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gfh92" podStartSLOduration=3.5173134299999997 podStartE2EDuration="6.504123849s" podCreationTimestamp="2025-11-24 11:33:21 +0000 UTC" firstStartedPulling="2025-11-24 11:33:23.438449376 +0000 UTC m=+1455.149973862" lastFinishedPulling="2025-11-24 11:33:26.425259795 +0000 UTC m=+1458.136784281" observedRunningTime="2025-11-24 11:33:27.502231074 +0000 UTC m=+1459.213755560" watchObservedRunningTime="2025-11-24 11:33:27.504123849 +0000 UTC m=+1459.215648325" Nov 24 11:33:31 crc kubenswrapper[5072]: I1124 11:33:31.526802 5072 generic.go:334] "Generic (PLEG): container 
finished" podID="55d5c4ad-dbbc-4728-bac4-f12adda414f1" containerID="bacde65c0bf7088a571c6dd75c114ac6fdad7e96b5f661ba9978746b8f8f018e" exitCode=0 Nov 24 11:33:31 crc kubenswrapper[5072]: I1124 11:33:31.526898 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h" event={"ID":"55d5c4ad-dbbc-4728-bac4-f12adda414f1","Type":"ContainerDied","Data":"bacde65c0bf7088a571c6dd75c114ac6fdad7e96b5f661ba9978746b8f8f018e"} Nov 24 11:33:31 crc kubenswrapper[5072]: I1124 11:33:31.807529 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gfh92" Nov 24 11:33:31 crc kubenswrapper[5072]: I1124 11:33:31.807620 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gfh92" Nov 24 11:33:31 crc kubenswrapper[5072]: I1124 11:33:31.865300 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gfh92" Nov 24 11:33:32 crc kubenswrapper[5072]: I1124 11:33:32.615424 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gfh92" Nov 24 11:33:32 crc kubenswrapper[5072]: I1124 11:33:32.674320 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gfh92"] Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.008496 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h" Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.104881 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d5c4ad-dbbc-4728-bac4-f12adda414f1-bootstrap-combined-ca-bundle\") pod \"55d5c4ad-dbbc-4728-bac4-f12adda414f1\" (UID: \"55d5c4ad-dbbc-4728-bac4-f12adda414f1\") " Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.105204 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55d5c4ad-dbbc-4728-bac4-f12adda414f1-ssh-key\") pod \"55d5c4ad-dbbc-4728-bac4-f12adda414f1\" (UID: \"55d5c4ad-dbbc-4728-bac4-f12adda414f1\") " Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.105264 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvbmj\" (UniqueName: \"kubernetes.io/projected/55d5c4ad-dbbc-4728-bac4-f12adda414f1-kube-api-access-cvbmj\") pod \"55d5c4ad-dbbc-4728-bac4-f12adda414f1\" (UID: \"55d5c4ad-dbbc-4728-bac4-f12adda414f1\") " Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.105321 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55d5c4ad-dbbc-4728-bac4-f12adda414f1-inventory\") pod \"55d5c4ad-dbbc-4728-bac4-f12adda414f1\" (UID: \"55d5c4ad-dbbc-4728-bac4-f12adda414f1\") " Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.111521 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55d5c4ad-dbbc-4728-bac4-f12adda414f1-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "55d5c4ad-dbbc-4728-bac4-f12adda414f1" (UID: "55d5c4ad-dbbc-4728-bac4-f12adda414f1"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.111587 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55d5c4ad-dbbc-4728-bac4-f12adda414f1-kube-api-access-cvbmj" (OuterVolumeSpecName: "kube-api-access-cvbmj") pod "55d5c4ad-dbbc-4728-bac4-f12adda414f1" (UID: "55d5c4ad-dbbc-4728-bac4-f12adda414f1"). InnerVolumeSpecName "kube-api-access-cvbmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.130613 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55d5c4ad-dbbc-4728-bac4-f12adda414f1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "55d5c4ad-dbbc-4728-bac4-f12adda414f1" (UID: "55d5c4ad-dbbc-4728-bac4-f12adda414f1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.133553 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55d5c4ad-dbbc-4728-bac4-f12adda414f1-inventory" (OuterVolumeSpecName: "inventory") pod "55d5c4ad-dbbc-4728-bac4-f12adda414f1" (UID: "55d5c4ad-dbbc-4728-bac4-f12adda414f1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.208181 5072 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d5c4ad-dbbc-4728-bac4-f12adda414f1-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.208213 5072 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55d5c4ad-dbbc-4728-bac4-f12adda414f1-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.208222 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvbmj\" (UniqueName: \"kubernetes.io/projected/55d5c4ad-dbbc-4728-bac4-f12adda414f1-kube-api-access-cvbmj\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.208231 5072 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55d5c4ad-dbbc-4728-bac4-f12adda414f1-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.551137 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h" event={"ID":"55d5c4ad-dbbc-4728-bac4-f12adda414f1","Type":"ContainerDied","Data":"b3cb47af55c002d275c5f638396c3afb9a24edd68755f434f433b4d78baf1bab"} Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.551176 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h" Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.551212 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cb47af55c002d275c5f638396c3afb9a24edd68755f434f433b4d78baf1bab" Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.665030 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-99glp"] Nov 24 11:33:33 crc kubenswrapper[5072]: E1124 11:33:33.665526 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55d5c4ad-dbbc-4728-bac4-f12adda414f1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.665543 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="55d5c4ad-dbbc-4728-bac4-f12adda414f1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.665795 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="55d5c4ad-dbbc-4728-bac4-f12adda414f1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.666480 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-99glp" Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.670122 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.670253 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.673580 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b6s7d" Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.674222 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.684394 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-99glp"] Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.725490 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcdrp\" (UniqueName: \"kubernetes.io/projected/1b6db25f-182c-4b29-a975-acfa3253dec8-kube-api-access-dcdrp\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-99glp\" (UID: \"1b6db25f-182c-4b29-a975-acfa3253dec8\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-99glp" Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.725612 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1b6db25f-182c-4b29-a975-acfa3253dec8-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-99glp\" (UID: \"1b6db25f-182c-4b29-a975-acfa3253dec8\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-99glp" Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.725732 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b6db25f-182c-4b29-a975-acfa3253dec8-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-99glp\" (UID: \"1b6db25f-182c-4b29-a975-acfa3253dec8\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-99glp" Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.827005 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1b6db25f-182c-4b29-a975-acfa3253dec8-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-99glp\" (UID: \"1b6db25f-182c-4b29-a975-acfa3253dec8\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-99glp" Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.827103 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b6db25f-182c-4b29-a975-acfa3253dec8-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-99glp\" (UID: \"1b6db25f-182c-4b29-a975-acfa3253dec8\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-99glp" Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.827162 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcdrp\" (UniqueName: \"kubernetes.io/projected/1b6db25f-182c-4b29-a975-acfa3253dec8-kube-api-access-dcdrp\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-99glp\" (UID: \"1b6db25f-182c-4b29-a975-acfa3253dec8\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-99glp" Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.832449 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b6db25f-182c-4b29-a975-acfa3253dec8-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-99glp\" (UID: \"1b6db25f-182c-4b29-a975-acfa3253dec8\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-99glp" Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.844444 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcdrp\" (UniqueName: \"kubernetes.io/projected/1b6db25f-182c-4b29-a975-acfa3253dec8-kube-api-access-dcdrp\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-99glp\" (UID: \"1b6db25f-182c-4b29-a975-acfa3253dec8\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-99glp" Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.845884 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1b6db25f-182c-4b29-a975-acfa3253dec8-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-99glp\" (UID: \"1b6db25f-182c-4b29-a975-acfa3253dec8\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-99glp" Nov 24 11:33:33 crc kubenswrapper[5072]: I1124 11:33:33.983481 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-99glp" Nov 24 11:33:34 crc kubenswrapper[5072]: I1124 11:33:34.491612 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-99glp"] Nov 24 11:33:34 crc kubenswrapper[5072]: I1124 11:33:34.563979 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-99glp" event={"ID":"1b6db25f-182c-4b29-a975-acfa3253dec8","Type":"ContainerStarted","Data":"b9a40532decc5fc7e749b7476da62a14d3a673a25bec40c97477fa78761cf333"} Nov 24 11:33:34 crc kubenswrapper[5072]: I1124 11:33:34.564150 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gfh92" podUID="6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07" containerName="registry-server" containerID="cri-o://03a1468dd9818807db8edafb4fe0863ebc4b79b9449c7e22d63704b93792341d" gracePeriod=2 Nov 24 11:33:35 crc kubenswrapper[5072]: I1124 11:33:35.093268 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gfh92" Nov 24 11:33:35 crc kubenswrapper[5072]: I1124 11:33:35.148069 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8h49\" (UniqueName: \"kubernetes.io/projected/6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07-kube-api-access-q8h49\") pod \"6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07\" (UID: \"6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07\") " Nov 24 11:33:35 crc kubenswrapper[5072]: I1124 11:33:35.148587 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07-catalog-content\") pod \"6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07\" (UID: \"6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07\") " Nov 24 11:33:35 crc kubenswrapper[5072]: I1124 11:33:35.148639 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07-utilities\") pod \"6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07\" (UID: \"6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07\") " Nov 24 11:33:35 crc kubenswrapper[5072]: I1124 11:33:35.152717 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07-utilities" (OuterVolumeSpecName: "utilities") pod "6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07" (UID: "6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:33:35 crc kubenswrapper[5072]: I1124 11:33:35.158058 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07-kube-api-access-q8h49" (OuterVolumeSpecName: "kube-api-access-q8h49") pod "6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07" (UID: "6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07"). InnerVolumeSpecName "kube-api-access-q8h49". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:33:35 crc kubenswrapper[5072]: I1124 11:33:35.210626 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07" (UID: "6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:33:35 crc kubenswrapper[5072]: I1124 11:33:35.250434 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8h49\" (UniqueName: \"kubernetes.io/projected/6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07-kube-api-access-q8h49\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:35 crc kubenswrapper[5072]: I1124 11:33:35.250461 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:35 crc kubenswrapper[5072]: I1124 11:33:35.250474 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:35 crc kubenswrapper[5072]: I1124 11:33:35.576681 5072 generic.go:334] "Generic (PLEG): container finished" podID="6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07" containerID="03a1468dd9818807db8edafb4fe0863ebc4b79b9449c7e22d63704b93792341d" exitCode=0 Nov 24 11:33:35 crc kubenswrapper[5072]: I1124 11:33:35.576733 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gfh92" event={"ID":"6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07","Type":"ContainerDied","Data":"03a1468dd9818807db8edafb4fe0863ebc4b79b9449c7e22d63704b93792341d"} Nov 24 11:33:35 crc kubenswrapper[5072]: I1124 11:33:35.576740 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gfh92" Nov 24 11:33:35 crc kubenswrapper[5072]: I1124 11:33:35.576779 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gfh92" event={"ID":"6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07","Type":"ContainerDied","Data":"fbac9b9c3a999b7c55302bda6c48e71082af614209b9c6ff2f08adb920f392c8"} Nov 24 11:33:35 crc kubenswrapper[5072]: I1124 11:33:35.576808 5072 scope.go:117] "RemoveContainer" containerID="03a1468dd9818807db8edafb4fe0863ebc4b79b9449c7e22d63704b93792341d" Nov 24 11:33:35 crc kubenswrapper[5072]: I1124 11:33:35.602131 5072 scope.go:117] "RemoveContainer" containerID="522351488f02d569dcffd13a39d89c1d83ea369f7239a884125af9082ee811fb" Nov 24 11:33:35 crc kubenswrapper[5072]: I1124 11:33:35.636149 5072 scope.go:117] "RemoveContainer" containerID="1ed77df663e498dd5db654bd1d5bde5c613b9ff62f6af0c5269b6f2440bdcc7e" Nov 24 11:33:35 crc kubenswrapper[5072]: I1124 11:33:35.638283 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gfh92"] Nov 24 11:33:35 crc kubenswrapper[5072]: I1124 11:33:35.647227 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gfh92"] Nov 24 11:33:35 crc kubenswrapper[5072]: I1124 11:33:35.702417 5072 scope.go:117] "RemoveContainer" containerID="03a1468dd9818807db8edafb4fe0863ebc4b79b9449c7e22d63704b93792341d" Nov 24 11:33:35 crc kubenswrapper[5072]: E1124 11:33:35.702987 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03a1468dd9818807db8edafb4fe0863ebc4b79b9449c7e22d63704b93792341d\": container with ID starting with 03a1468dd9818807db8edafb4fe0863ebc4b79b9449c7e22d63704b93792341d not found: ID does not exist" containerID="03a1468dd9818807db8edafb4fe0863ebc4b79b9449c7e22d63704b93792341d" Nov 24 11:33:35 crc 
kubenswrapper[5072]: I1124 11:33:35.703034 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03a1468dd9818807db8edafb4fe0863ebc4b79b9449c7e22d63704b93792341d"} err="failed to get container status \"03a1468dd9818807db8edafb4fe0863ebc4b79b9449c7e22d63704b93792341d\": rpc error: code = NotFound desc = could not find container \"03a1468dd9818807db8edafb4fe0863ebc4b79b9449c7e22d63704b93792341d\": container with ID starting with 03a1468dd9818807db8edafb4fe0863ebc4b79b9449c7e22d63704b93792341d not found: ID does not exist" Nov 24 11:33:35 crc kubenswrapper[5072]: I1124 11:33:35.703066 5072 scope.go:117] "RemoveContainer" containerID="522351488f02d569dcffd13a39d89c1d83ea369f7239a884125af9082ee811fb" Nov 24 11:33:35 crc kubenswrapper[5072]: E1124 11:33:35.703561 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"522351488f02d569dcffd13a39d89c1d83ea369f7239a884125af9082ee811fb\": container with ID starting with 522351488f02d569dcffd13a39d89c1d83ea369f7239a884125af9082ee811fb not found: ID does not exist" containerID="522351488f02d569dcffd13a39d89c1d83ea369f7239a884125af9082ee811fb" Nov 24 11:33:35 crc kubenswrapper[5072]: I1124 11:33:35.703592 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"522351488f02d569dcffd13a39d89c1d83ea369f7239a884125af9082ee811fb"} err="failed to get container status \"522351488f02d569dcffd13a39d89c1d83ea369f7239a884125af9082ee811fb\": rpc error: code = NotFound desc = could not find container \"522351488f02d569dcffd13a39d89c1d83ea369f7239a884125af9082ee811fb\": container with ID starting with 522351488f02d569dcffd13a39d89c1d83ea369f7239a884125af9082ee811fb not found: ID does not exist" Nov 24 11:33:35 crc kubenswrapper[5072]: I1124 11:33:35.703615 5072 scope.go:117] "RemoveContainer" containerID="1ed77df663e498dd5db654bd1d5bde5c613b9ff62f6af0c5269b6f2440bdcc7e" Nov 24 11:33:35 crc kubenswrapper[5072]: E1124 11:33:35.705827 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ed77df663e498dd5db654bd1d5bde5c613b9ff62f6af0c5269b6f2440bdcc7e\": container with ID starting with 1ed77df663e498dd5db654bd1d5bde5c613b9ff62f6af0c5269b6f2440bdcc7e not found: ID does not exist" containerID="1ed77df663e498dd5db654bd1d5bde5c613b9ff62f6af0c5269b6f2440bdcc7e" Nov 24 11:33:35 crc kubenswrapper[5072]: I1124 11:33:35.705856 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ed77df663e498dd5db654bd1d5bde5c613b9ff62f6af0c5269b6f2440bdcc7e"} err="failed to get container status \"1ed77df663e498dd5db654bd1d5bde5c613b9ff62f6af0c5269b6f2440bdcc7e\": rpc error: code = NotFound desc = could not find container \"1ed77df663e498dd5db654bd1d5bde5c613b9ff62f6af0c5269b6f2440bdcc7e\": container with ID starting with 1ed77df663e498dd5db654bd1d5bde5c613b9ff62f6af0c5269b6f2440bdcc7e not found: ID does not exist" Nov 24 11:33:36 crc kubenswrapper[5072]: I1124 11:33:36.588774 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-99glp" event={"ID":"1b6db25f-182c-4b29-a975-acfa3253dec8","Type":"ContainerStarted","Data":"77a3a39bf85af92b1c834d68dbde8708a949c92eabd566ff7cd8bd1d49cb6f9f"} Nov 24 11:33:36 crc kubenswrapper[5072]: I1124 11:33:36.607874 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-99glp" podStartSLOduration=2.341409216 podStartE2EDuration="3.607854768s" podCreationTimestamp="2025-11-24 11:33:33 +0000 UTC" firstStartedPulling="2025-11-24 11:33:34.494114232 +0000 UTC m=+1466.205638718" lastFinishedPulling="2025-11-24 11:33:35.760559794 +0000 UTC m=+1467.472084270" observedRunningTime="2025-11-24 11:33:36.604268183 +0000 UTC m=+1468.315792699" watchObservedRunningTime="2025-11-24 11:33:36.607854768 +0000 UTC m=+1468.319379244" Nov 24 11:33:37 crc kubenswrapper[5072]: I1124 11:33:37.037862 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07" path="/var/lib/kubelet/pods/6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07/volumes" Nov 24 11:34:13 crc kubenswrapper[5072]: I1124 11:34:13.645276 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:34:13 crc kubenswrapper[5072]: I1124 11:34:13.645705 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:34:43 crc kubenswrapper[5072]: I1124 11:34:43.644899 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:34:43 crc kubenswrapper[5072]: I1124 11:34:43.645634 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:34:51 crc kubenswrapper[5072]: I1124 11:34:51.055485 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-3c02-account-create-q2mpb"] Nov 24 11:34:51 crc kubenswrapper[5072]: I1124 11:34:51.097019 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-zkhhj"] Nov 24 11:34:51 crc kubenswrapper[5072]: I1124 11:34:51.113203 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-3c02-account-create-q2mpb"] Nov 24 11:34:51 crc kubenswrapper[5072]: I1124 11:34:51.125106 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-zkhhj"] Nov 24 11:34:53 crc kubenswrapper[5072]: I1124 11:34:53.036822 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c" path="/var/lib/kubelet/pods/3c72a93a-949b-4cdd-ba4f-fbd9371a4b1c/volumes" Nov 24 11:34:53 crc kubenswrapper[5072]: I1124 11:34:53.040423 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d740db71-09cb-4511-9491-34292bf95e8f" path="/var/lib/kubelet/pods/d740db71-09cb-4511-9491-34292bf95e8f/volumes" Nov 24 11:34:55 crc kubenswrapper[5072]: I1124 11:34:55.039285 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/keystone-db-create-jq922"] Nov 24 11:34:55 crc kubenswrapper[5072]: I1124 11:34:55.047271 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-jq922"] Nov 24 11:34:55 crc kubenswrapper[5072]: I1124 11:34:55.054935 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5244-account-create-wtbzl"] Nov 24 11:34:55 crc kubenswrapper[5072]: I1124 11:34:55.062321 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-5244-account-create-wtbzl"] Nov 24 11:34:55 crc kubenswrapper[5072]: I1124 11:34:55.398958 5072 generic.go:334] "Generic (PLEG): container finished" podID="1b6db25f-182c-4b29-a975-acfa3253dec8" containerID="77a3a39bf85af92b1c834d68dbde8708a949c92eabd566ff7cd8bd1d49cb6f9f" exitCode=0 Nov 24 11:34:55 crc kubenswrapper[5072]: I1124 11:34:55.399005 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-99glp" event={"ID":"1b6db25f-182c-4b29-a975-acfa3253dec8","Type":"ContainerDied","Data":"77a3a39bf85af92b1c834d68dbde8708a949c92eabd566ff7cd8bd1d49cb6f9f"} Nov 24 11:34:56 crc kubenswrapper[5072]: I1124 11:34:56.028976 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-c6np9"] Nov 24 11:34:56 crc kubenswrapper[5072]: I1124 11:34:56.035812 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7d4a-account-create-vqdtq"] Nov 24 11:34:56 crc kubenswrapper[5072]: I1124 11:34:56.044547 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-c6np9"] Nov 24 11:34:56 crc kubenswrapper[5072]: I1124 11:34:56.051174 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7d4a-account-create-vqdtq"] Nov 24 11:34:56 crc kubenswrapper[5072]: I1124 11:34:56.944878 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-99glp" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.034619 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d72b502-87c9-475a-93b4-739816ea7f7e" path="/var/lib/kubelet/pods/0d72b502-87c9-475a-93b4-739816ea7f7e/volumes" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.035360 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="195a7abe-4729-4b77-8198-3eca911c2d84" path="/var/lib/kubelet/pods/195a7abe-4729-4b77-8198-3eca911c2d84/volumes" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.036063 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="295f55cf-b9ac-454a-a715-b48c901a8f34" path="/var/lib/kubelet/pods/295f55cf-b9ac-454a-a715-b48c901a8f34/volumes" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.036339 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b6db25f-182c-4b29-a975-acfa3253dec8-inventory\") pod \"1b6db25f-182c-4b29-a975-acfa3253dec8\" (UID: \"1b6db25f-182c-4b29-a975-acfa3253dec8\") " Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.036849 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1b6db25f-182c-4b29-a975-acfa3253dec8-ssh-key\") pod \"1b6db25f-182c-4b29-a975-acfa3253dec8\" (UID: \"1b6db25f-182c-4b29-a975-acfa3253dec8\") " Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.036999 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcdrp\" (UniqueName: \"kubernetes.io/projected/1b6db25f-182c-4b29-a975-acfa3253dec8-kube-api-access-dcdrp\") pod \"1b6db25f-182c-4b29-a975-acfa3253dec8\" (UID: \"1b6db25f-182c-4b29-a975-acfa3253dec8\") " Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.038119 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e39b3a7c-db7f-4d96-bbb1-1293b0432659" path="/var/lib/kubelet/pods/e39b3a7c-db7f-4d96-bbb1-1293b0432659/volumes" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.044921 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b6db25f-182c-4b29-a975-acfa3253dec8-kube-api-access-dcdrp" (OuterVolumeSpecName: "kube-api-access-dcdrp") pod "1b6db25f-182c-4b29-a975-acfa3253dec8" (UID: "1b6db25f-182c-4b29-a975-acfa3253dec8"). InnerVolumeSpecName "kube-api-access-dcdrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.065718 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b6db25f-182c-4b29-a975-acfa3253dec8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1b6db25f-182c-4b29-a975-acfa3253dec8" (UID: "1b6db25f-182c-4b29-a975-acfa3253dec8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.078926 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b6db25f-182c-4b29-a975-acfa3253dec8-inventory" (OuterVolumeSpecName: "inventory") pod "1b6db25f-182c-4b29-a975-acfa3253dec8" (UID: "1b6db25f-182c-4b29-a975-acfa3253dec8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.139736 5072 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b6db25f-182c-4b29-a975-acfa3253dec8-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.139970 5072 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1b6db25f-182c-4b29-a975-acfa3253dec8-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.139980 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcdrp\" (UniqueName: \"kubernetes.io/projected/1b6db25f-182c-4b29-a975-acfa3253dec8-kube-api-access-dcdrp\") on node \"crc\" DevicePath \"\"" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.422852 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-99glp" event={"ID":"1b6db25f-182c-4b29-a975-acfa3253dec8","Type":"ContainerDied","Data":"b9a40532decc5fc7e749b7476da62a14d3a673a25bec40c97477fa78761cf333"} Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.422900 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9a40532decc5fc7e749b7476da62a14d3a673a25bec40c97477fa78761cf333" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.422925 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-99glp" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.515804 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg"] Nov 24 11:34:57 crc kubenswrapper[5072]: E1124 11:34:57.516275 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07" containerName="registry-server" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.516292 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07" containerName="registry-server" Nov 24 11:34:57 crc kubenswrapper[5072]: E1124 11:34:57.516323 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07" containerName="extract-content" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.516330 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07" containerName="extract-content" Nov 24 11:34:57 crc kubenswrapper[5072]: E1124 11:34:57.516340 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07" containerName="extract-utilities" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.516348 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07" containerName="extract-utilities" Nov 24 11:34:57 crc kubenswrapper[5072]: E1124 11:34:57.516355 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b6db25f-182c-4b29-a975-acfa3253dec8" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.516362 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b6db25f-182c-4b29-a975-acfa3253dec8" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.516539 5072 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1b6db25f-182c-4b29-a975-acfa3253dec8" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.516563 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eb6a6e0-95cf-4ac0-a2c9-ce0ceed5de07" containerName="registry-server" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.517162 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.521317 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.521712 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.522725 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.524172 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b6s7d" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.526053 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg"] Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.648827 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t4wg\" (UniqueName: \"kubernetes.io/projected/6cf1de62-84ec-42cd-8354-14d52eb4e29b-kube-api-access-2t4wg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg\" (UID: \"6cf1de62-84ec-42cd-8354-14d52eb4e29b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.648878 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cf1de62-84ec-42cd-8354-14d52eb4e29b-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg\" (UID: \"6cf1de62-84ec-42cd-8354-14d52eb4e29b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.649062 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cf1de62-84ec-42cd-8354-14d52eb4e29b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg\" (UID: \"6cf1de62-84ec-42cd-8354-14d52eb4e29b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.751167 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t4wg\" (UniqueName: \"kubernetes.io/projected/6cf1de62-84ec-42cd-8354-14d52eb4e29b-kube-api-access-2t4wg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg\" (UID: \"6cf1de62-84ec-42cd-8354-14d52eb4e29b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.751230 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/6cf1de62-84ec-42cd-8354-14d52eb4e29b-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg\" (UID: \"6cf1de62-84ec-42cd-8354-14d52eb4e29b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.751306 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cf1de62-84ec-42cd-8354-14d52eb4e29b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg\" (UID: \"6cf1de62-84ec-42cd-8354-14d52eb4e29b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.759361 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cf1de62-84ec-42cd-8354-14d52eb4e29b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg\" (UID: \"6cf1de62-84ec-42cd-8354-14d52eb4e29b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.760663 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cf1de62-84ec-42cd-8354-14d52eb4e29b-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg\" (UID: \"6cf1de62-84ec-42cd-8354-14d52eb4e29b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.781585 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t4wg\" (UniqueName: \"kubernetes.io/projected/6cf1de62-84ec-42cd-8354-14d52eb4e29b-kube-api-access-2t4wg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg\" (UID: \"6cf1de62-84ec-42cd-8354-14d52eb4e29b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg" Nov 24 11:34:57 crc kubenswrapper[5072]: I1124 11:34:57.839640 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg" Nov 24 11:34:58 crc kubenswrapper[5072]: I1124 11:34:58.441063 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg"] Nov 24 11:34:58 crc kubenswrapper[5072]: I1124 11:34:58.456427 5072 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 11:34:59 crc kubenswrapper[5072]: I1124 11:34:59.450205 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg" event={"ID":"6cf1de62-84ec-42cd-8354-14d52eb4e29b","Type":"ContainerStarted","Data":"88a8cf97c05f80035492dc10257e6a33e7c8316097c95fd7fbc33d1e4c88ae5f"} Nov 24 11:34:59 crc kubenswrapper[5072]: I1124 11:34:59.450714 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg" event={"ID":"6cf1de62-84ec-42cd-8354-14d52eb4e29b","Type":"ContainerStarted","Data":"decd8866cacc9f19d82a9de3efd5cddc9daf64b8230ad169aec3ce63c4009bd0"} Nov 24 11:34:59 crc kubenswrapper[5072]: I1124 11:34:59.474094 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg" podStartSLOduration=1.826255818 podStartE2EDuration="2.474075023s" podCreationTimestamp="2025-11-24 11:34:57 +0000 UTC" firstStartedPulling="2025-11-24 11:34:58.456194607 +0000 UTC m=+1550.167719083" lastFinishedPulling="2025-11-24 11:34:59.104013772 +0000 UTC m=+1550.815538288" observedRunningTime="2025-11-24 11:34:59.472171066 +0000 UTC m=+1551.183695552" watchObservedRunningTime="2025-11-24 11:34:59.474075023 +0000 UTC m=+1551.185599509" Nov 24 11:35:04 crc kubenswrapper[5072]: I1124 11:35:04.498362 5072 generic.go:334] "Generic (PLEG): container finished" podID="6cf1de62-84ec-42cd-8354-14d52eb4e29b" containerID="88a8cf97c05f80035492dc10257e6a33e7c8316097c95fd7fbc33d1e4c88ae5f" exitCode=0 Nov 24 11:35:04 crc kubenswrapper[5072]: I1124 11:35:04.498724 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg" event={"ID":"6cf1de62-84ec-42cd-8354-14d52eb4e29b","Type":"ContainerDied","Data":"88a8cf97c05f80035492dc10257e6a33e7c8316097c95fd7fbc33d1e4c88ae5f"} Nov 24 11:35:05 crc kubenswrapper[5072]: I1124 11:35:05.965850 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg" Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.007123 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cf1de62-84ec-42cd-8354-14d52eb4e29b-inventory\") pod \"6cf1de62-84ec-42cd-8354-14d52eb4e29b\" (UID: \"6cf1de62-84ec-42cd-8354-14d52eb4e29b\") " Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.007166 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t4wg\" (UniqueName: \"kubernetes.io/projected/6cf1de62-84ec-42cd-8354-14d52eb4e29b-kube-api-access-2t4wg\") pod \"6cf1de62-84ec-42cd-8354-14d52eb4e29b\" (UID: \"6cf1de62-84ec-42cd-8354-14d52eb4e29b\") " Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.007184 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cf1de62-84ec-42cd-8354-14d52eb4e29b-ssh-key\") pod \"6cf1de62-84ec-42cd-8354-14d52eb4e29b\" (UID: \"6cf1de62-84ec-42cd-8354-14d52eb4e29b\") " Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.014042 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cf1de62-84ec-42cd-8354-14d52eb4e29b-kube-api-access-2t4wg" (OuterVolumeSpecName: "kube-api-access-2t4wg") pod "6cf1de62-84ec-42cd-8354-14d52eb4e29b" (UID: "6cf1de62-84ec-42cd-8354-14d52eb4e29b"). InnerVolumeSpecName "kube-api-access-2t4wg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.040548 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cf1de62-84ec-42cd-8354-14d52eb4e29b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6cf1de62-84ec-42cd-8354-14d52eb4e29b" (UID: "6cf1de62-84ec-42cd-8354-14d52eb4e29b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.064234 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cf1de62-84ec-42cd-8354-14d52eb4e29b-inventory" (OuterVolumeSpecName: "inventory") pod "6cf1de62-84ec-42cd-8354-14d52eb4e29b" (UID: "6cf1de62-84ec-42cd-8354-14d52eb4e29b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.109791 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2t4wg\" (UniqueName: \"kubernetes.io/projected/6cf1de62-84ec-42cd-8354-14d52eb4e29b-kube-api-access-2t4wg\") on node \"crc\" DevicePath \"\"" Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.109836 5072 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cf1de62-84ec-42cd-8354-14d52eb4e29b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.109849 5072 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cf1de62-84ec-42cd-8354-14d52eb4e29b-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.520776 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg" event={"ID":"6cf1de62-84ec-42cd-8354-14d52eb4e29b","Type":"ContainerDied","Data":"decd8866cacc9f19d82a9de3efd5cddc9daf64b8230ad169aec3ce63c4009bd0"} Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.520822 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="decd8866cacc9f19d82a9de3efd5cddc9daf64b8230ad169aec3ce63c4009bd0" Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.520826 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg" Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.596506 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-n2x4l"] Nov 24 11:35:06 crc kubenswrapper[5072]: E1124 11:35:06.596892 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cf1de62-84ec-42cd-8354-14d52eb4e29b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.596910 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cf1de62-84ec-42cd-8354-14d52eb4e29b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.597112 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cf1de62-84ec-42cd-8354-14d52eb4e29b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.599044 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n2x4l" Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.601863 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.603331 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.606296 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b6s7d" Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.609122 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.612808 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-n2x4l"] Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.719273 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e327a89-b7a4-4e57-bc77-bb3a64afce6d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-n2x4l\" (UID: \"2e327a89-b7a4-4e57-bc77-bb3a64afce6d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n2x4l" Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.719367 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72prg\" (UniqueName: \"kubernetes.io/projected/2e327a89-b7a4-4e57-bc77-bb3a64afce6d-kube-api-access-72prg\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-n2x4l\" (UID: \"2e327a89-b7a4-4e57-bc77-bb3a64afce6d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n2x4l" Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.719428 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e327a89-b7a4-4e57-bc77-bb3a64afce6d-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-n2x4l\" (UID: \"2e327a89-b7a4-4e57-bc77-bb3a64afce6d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n2x4l" Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.821897 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e327a89-b7a4-4e57-bc77-bb3a64afce6d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-n2x4l\" (UID: \"2e327a89-b7a4-4e57-bc77-bb3a64afce6d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n2x4l" Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.822002 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72prg\" (UniqueName: \"kubernetes.io/projected/2e327a89-b7a4-4e57-bc77-bb3a64afce6d-kube-api-access-72prg\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-n2x4l\" (UID: \"2e327a89-b7a4-4e57-bc77-bb3a64afce6d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n2x4l" Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.822040 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e327a89-b7a4-4e57-bc77-bb3a64afce6d-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-n2x4l\" (UID: 
\"2e327a89-b7a4-4e57-bc77-bb3a64afce6d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n2x4l" Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.829147 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e327a89-b7a4-4e57-bc77-bb3a64afce6d-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-n2x4l\" (UID: \"2e327a89-b7a4-4e57-bc77-bb3a64afce6d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n2x4l" Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.829536 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e327a89-b7a4-4e57-bc77-bb3a64afce6d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-n2x4l\" (UID: \"2e327a89-b7a4-4e57-bc77-bb3a64afce6d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n2x4l" Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.840361 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72prg\" (UniqueName: \"kubernetes.io/projected/2e327a89-b7a4-4e57-bc77-bb3a64afce6d-kube-api-access-72prg\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-n2x4l\" (UID: \"2e327a89-b7a4-4e57-bc77-bb3a64afce6d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n2x4l" Nov 24 11:35:06 crc kubenswrapper[5072]: I1124 11:35:06.918659 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n2x4l" Nov 24 11:35:07 crc kubenswrapper[5072]: I1124 11:35:07.501687 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-n2x4l"] Nov 24 11:35:07 crc kubenswrapper[5072]: I1124 11:35:07.547915 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n2x4l" event={"ID":"2e327a89-b7a4-4e57-bc77-bb3a64afce6d","Type":"ContainerStarted","Data":"a64721840154b5d11d12ffbb2893b0cb85f575a9c7042831aa68e1426f66516f"} Nov 24 11:35:08 crc kubenswrapper[5072]: I1124 11:35:08.565005 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n2x4l" event={"ID":"2e327a89-b7a4-4e57-bc77-bb3a64afce6d","Type":"ContainerStarted","Data":"a8e5f07e17bf328e8092b0d7be49c38dfe1062e29b1e94e4b268ed6581a78740"} Nov 24 11:35:08 crc kubenswrapper[5072]: I1124 11:35:08.596434 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n2x4l" podStartSLOduration=2.121873009 podStartE2EDuration="2.596407489s" podCreationTimestamp="2025-11-24 11:35:06 +0000 UTC" firstStartedPulling="2025-11-24 11:35:07.522145839 +0000 UTC m=+1559.233670325" lastFinishedPulling="2025-11-24 11:35:07.996680329 +0000 UTC m=+1559.708204805" observedRunningTime="2025-11-24 11:35:08.589042271 +0000 UTC m=+1560.300566777" watchObservedRunningTime="2025-11-24 11:35:08.596407489 +0000 UTC m=+1560.307931985" Nov 24 11:35:13 crc kubenswrapper[5072]: I1124 11:35:13.645250 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:35:13 crc kubenswrapper[5072]: I1124 11:35:13.646010 5072 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:35:13 crc kubenswrapper[5072]: I1124 11:35:13.646087 5072 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 11:35:13 crc kubenswrapper[5072]: I1124 11:35:13.647280 5072 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec"} pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 11:35:13 crc kubenswrapper[5072]: I1124 11:35:13.647402 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" containerID="cri-o://f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec" gracePeriod=600 Nov 24 11:35:13 crc kubenswrapper[5072]: E1124 11:35:13.779907 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:35:14 crc kubenswrapper[5072]: I1124 11:35:14.059969 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-hdh5p"] Nov 24 11:35:14 crc kubenswrapper[5072]: I1124 11:35:14.068947 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-hdh5p"] Nov 24 11:35:14 crc kubenswrapper[5072]: I1124 11:35:14.526843 5072 scope.go:117] "RemoveContainer" containerID="30bd6a20ad532d4ca9c20ae128f77136b7b249a19b3b00ae583f9d48f4c04316" Nov 24 11:35:14 crc kubenswrapper[5072]: I1124 11:35:14.559867 5072 scope.go:117] "RemoveContainer" containerID="3ffb029530f8c0960bdb88fdef4fee7e32a9264d54b36eae6daf9c001e91b67c" Nov 24 11:35:14 crc kubenswrapper[5072]: I1124 11:35:14.597795 5072 scope.go:117] "RemoveContainer" containerID="141c95e10db41e165a541f4d33e3fb431d956449fd956ad2cbd8c4930ff2f384" Nov 24 11:35:14 crc kubenswrapper[5072]: I1124 11:35:14.632939 5072 generic.go:334] "Generic (PLEG): container finished" podID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerID="f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec" exitCode=0 Nov 24 11:35:14 crc kubenswrapper[5072]: I1124 11:35:14.633029 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerDied","Data":"f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec"} Nov 24 11:35:14 crc kubenswrapper[5072]: I1124 11:35:14.633107 5072 scope.go:117] "RemoveContainer" containerID="6f55c06922e799a9c07f40b576b3a8c5fadc1f87864557b3d2231c8cbac92093" Nov 24 11:35:14 crc kubenswrapper[5072]: I1124 11:35:14.634132 5072 scope.go:117] "RemoveContainer" 
containerID="f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec" Nov 24 11:35:14 crc kubenswrapper[5072]: E1124 11:35:14.634897 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:35:14 crc kubenswrapper[5072]: I1124 11:35:14.639186 5072 scope.go:117] "RemoveContainer" containerID="38a91d6105e41bc4396681ef576d9b1524064107275d3860cf0d95485d50d468" Nov 24 11:35:14 crc kubenswrapper[5072]: I1124 11:35:14.673343 5072 scope.go:117] "RemoveContainer" containerID="9f500e95440ace3eec82f42e3fa443b276bb52c188ece8717e9c03a4315994d4" Nov 24 11:35:14 crc kubenswrapper[5072]: I1124 11:35:14.720045 5072 scope.go:117] "RemoveContainer" containerID="7cdac74e617cd61ac7bdf1c71b05601211f9e58cb768e5d05b407be135413980" Nov 24 11:35:14 crc kubenswrapper[5072]: I1124 11:35:14.746819 5072 scope.go:117] "RemoveContainer" containerID="c6ae9fdf337c178e542c8bc87178d1fbfe2dc0bd1fce6fc30fa1181524b456a8" Nov 24 11:35:14 crc kubenswrapper[5072]: I1124 11:35:14.764578 5072 scope.go:117] "RemoveContainer" containerID="c694f6acf6af52396dcde2b546f3f28759ac132a2761d7971341b73f0f435f17" Nov 24 11:35:15 crc kubenswrapper[5072]: I1124 11:35:15.044625 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76bdb5be-3864-4599-9ac5-7475f63290a3" path="/var/lib/kubelet/pods/76bdb5be-3864-4599-9ac5-7475f63290a3/volumes" Nov 24 11:35:29 crc kubenswrapper[5072]: I1124 11:35:29.023220 5072 scope.go:117] "RemoveContainer" containerID="f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec" Nov 24 11:35:29 crc kubenswrapper[5072]: E1124 11:35:29.023948 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:35:33 crc kubenswrapper[5072]: I1124 11:35:33.102529 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-h4ncm"] Nov 24 11:35:33 crc kubenswrapper[5072]: I1124 11:35:33.106494 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-a3c3-account-create-24pwx"] Nov 24 11:35:33 crc kubenswrapper[5072]: I1124 11:35:33.113157 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-mj6kc"] Nov 24 11:35:33 crc kubenswrapper[5072]: I1124 11:35:33.119800 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-a3c3-account-create-24pwx"] Nov 24 11:35:33 crc kubenswrapper[5072]: I1124 11:35:33.125936 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-mj6kc"] Nov 24 11:35:33 crc kubenswrapper[5072]: I1124 11:35:33.132705 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-h4ncm"] Nov 24 11:35:33 crc kubenswrapper[5072]: I1124 11:35:33.139423 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-wp6ws"] Nov 24 11:35:33 crc 
kubenswrapper[5072]: I1124 11:35:33.145307 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-wp6ws"] Nov 24 11:35:35 crc kubenswrapper[5072]: I1124 11:35:35.031591 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79a97b6f-0aa6-4059-8495-23ceff788793" path="/var/lib/kubelet/pods/79a97b6f-0aa6-4059-8495-23ceff788793/volumes" Nov 24 11:35:35 crc kubenswrapper[5072]: I1124 11:35:35.033215 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc652e4a-54d1-43f7-b547-d86b30ae0797" path="/var/lib/kubelet/pods/bc652e4a-54d1-43f7-b547-d86b30ae0797/volumes" Nov 24 11:35:35 crc kubenswrapper[5072]: I1124 11:35:35.034277 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bffbb2ab-3908-425a-ba38-80a69a37a16a" path="/var/lib/kubelet/pods/bffbb2ab-3908-425a-ba38-80a69a37a16a/volumes" Nov 24 11:35:35 crc kubenswrapper[5072]: I1124 11:35:35.035456 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f64dc57b-2fb4-4ad8-99a9-f9756664b3c4" path="/var/lib/kubelet/pods/f64dc57b-2fb4-4ad8-99a9-f9756664b3c4/volumes" Nov 24 11:35:36 crc kubenswrapper[5072]: I1124 11:35:36.049963 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-a502-account-create-z6jg6"] Nov 24 11:35:36 crc kubenswrapper[5072]: I1124 11:35:36.065200 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-a502-account-create-z6jg6"] Nov 24 11:35:36 crc kubenswrapper[5072]: I1124 11:35:36.077250 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-95b4-account-create-x4sc7"] Nov 24 11:35:36 crc kubenswrapper[5072]: I1124 11:35:36.088643 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-95b4-account-create-x4sc7"] Nov 24 11:35:37 crc kubenswrapper[5072]: I1124 11:35:37.037130 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="647daeca-7489-478d-930c-3a780336be49" path="/var/lib/kubelet/pods/647daeca-7489-478d-930c-3a780336be49/volumes" Nov 24 11:35:37 crc kubenswrapper[5072]: I1124 11:35:37.039003 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0b8deb9-6451-4091-bc77-884a3581af75" path="/var/lib/kubelet/pods/d0b8deb9-6451-4091-bc77-884a3581af75/volumes" Nov 24 11:35:40 crc kubenswrapper[5072]: I1124 11:35:40.016487 5072 scope.go:117] "RemoveContainer" containerID="f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec" Nov 24 11:35:40 crc kubenswrapper[5072]: E1124 11:35:40.016984 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:35:40 crc kubenswrapper[5072]: I1124 11:35:40.034360 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-sh9kr"] Nov 24 11:35:40 crc kubenswrapper[5072]: I1124 11:35:40.048297 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-sh9kr"] Nov 24 11:35:41 crc kubenswrapper[5072]: I1124 11:35:41.030564 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4f41a09-fa7a-4077-8502-58295771132e" path="/var/lib/kubelet/pods/d4f41a09-fa7a-4077-8502-58295771132e/volumes" 
Nov 24 11:35:49 crc kubenswrapper[5072]: I1124 11:35:49.207684 5072 generic.go:334] "Generic (PLEG): container finished" podID="2e327a89-b7a4-4e57-bc77-bb3a64afce6d" containerID="a8e5f07e17bf328e8092b0d7be49c38dfe1062e29b1e94e4b268ed6581a78740" exitCode=0
Nov 24 11:35:49 crc kubenswrapper[5072]: I1124 11:35:49.207759 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n2x4l" event={"ID":"2e327a89-b7a4-4e57-bc77-bb3a64afce6d","Type":"ContainerDied","Data":"a8e5f07e17bf328e8092b0d7be49c38dfe1062e29b1e94e4b268ed6581a78740"}
Nov 24 11:35:50 crc kubenswrapper[5072]: I1124 11:35:50.616755 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n2x4l"
Nov 24 11:35:50 crc kubenswrapper[5072]: I1124 11:35:50.741195 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72prg\" (UniqueName: \"kubernetes.io/projected/2e327a89-b7a4-4e57-bc77-bb3a64afce6d-kube-api-access-72prg\") pod \"2e327a89-b7a4-4e57-bc77-bb3a64afce6d\" (UID: \"2e327a89-b7a4-4e57-bc77-bb3a64afce6d\") "
Nov 24 11:35:50 crc kubenswrapper[5072]: I1124 11:35:50.741284 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e327a89-b7a4-4e57-bc77-bb3a64afce6d-ssh-key\") pod \"2e327a89-b7a4-4e57-bc77-bb3a64afce6d\" (UID: \"2e327a89-b7a4-4e57-bc77-bb3a64afce6d\") "
Nov 24 11:35:50 crc kubenswrapper[5072]: I1124 11:35:50.741418 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e327a89-b7a4-4e57-bc77-bb3a64afce6d-inventory\") pod \"2e327a89-b7a4-4e57-bc77-bb3a64afce6d\" (UID: \"2e327a89-b7a4-4e57-bc77-bb3a64afce6d\") "
Nov 24 11:35:50 crc kubenswrapper[5072]: I1124 11:35:50.747345 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e327a89-b7a4-4e57-bc77-bb3a64afce6d-kube-api-access-72prg" (OuterVolumeSpecName: "kube-api-access-72prg") pod "2e327a89-b7a4-4e57-bc77-bb3a64afce6d" (UID: "2e327a89-b7a4-4e57-bc77-bb3a64afce6d"). InnerVolumeSpecName "kube-api-access-72prg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:35:50 crc kubenswrapper[5072]: I1124 11:35:50.768312 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e327a89-b7a4-4e57-bc77-bb3a64afce6d-inventory" (OuterVolumeSpecName: "inventory") pod "2e327a89-b7a4-4e57-bc77-bb3a64afce6d" (UID: "2e327a89-b7a4-4e57-bc77-bb3a64afce6d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:35:50 crc kubenswrapper[5072]: I1124 11:35:50.769277 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e327a89-b7a4-4e57-bc77-bb3a64afce6d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2e327a89-b7a4-4e57-bc77-bb3a64afce6d" (UID: "2e327a89-b7a4-4e57-bc77-bb3a64afce6d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:35:50 crc kubenswrapper[5072]: I1124 11:35:50.843320 5072 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e327a89-b7a4-4e57-bc77-bb3a64afce6d-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 24 11:35:50 crc kubenswrapper[5072]: I1124 11:35:50.843353 5072 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e327a89-b7a4-4e57-bc77-bb3a64afce6d-inventory\") on node \"crc\" DevicePath \"\""
Nov 24 11:35:50 crc kubenswrapper[5072]: I1124 11:35:50.843365 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72prg\" (UniqueName: \"kubernetes.io/projected/2e327a89-b7a4-4e57-bc77-bb3a64afce6d-kube-api-access-72prg\") on node \"crc\" DevicePath \"\""
Nov 24 11:35:51 crc kubenswrapper[5072]: I1124 11:35:51.225700 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n2x4l" event={"ID":"2e327a89-b7a4-4e57-bc77-bb3a64afce6d","Type":"ContainerDied","Data":"a64721840154b5d11d12ffbb2893b0cb85f575a9c7042831aa68e1426f66516f"}
Nov 24 11:35:51 crc kubenswrapper[5072]: I1124 11:35:51.225736 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a64721840154b5d11d12ffbb2893b0cb85f575a9c7042831aa68e1426f66516f"
Nov 24 11:35:51 crc kubenswrapper[5072]: I1124 11:35:51.225768 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n2x4l"
Nov 24 11:35:51 crc kubenswrapper[5072]: I1124 11:35:51.318133 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7"]
Nov 24 11:35:51 crc kubenswrapper[5072]: E1124 11:35:51.318549 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e327a89-b7a4-4e57-bc77-bb3a64afce6d" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Nov 24 11:35:51 crc kubenswrapper[5072]: I1124 11:35:51.318571 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e327a89-b7a4-4e57-bc77-bb3a64afce6d" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Nov 24 11:35:51 crc kubenswrapper[5072]: I1124 11:35:51.318794 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e327a89-b7a4-4e57-bc77-bb3a64afce6d" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7" Nov 24 11:35:51 crc kubenswrapper[5072]: I1124 11:35:51.323040 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:35:51 crc kubenswrapper[5072]: I1124 11:35:51.323169 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b6s7d" Nov 24 11:35:51 crc kubenswrapper[5072]: I1124 11:35:51.323364 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:35:51 crc kubenswrapper[5072]: I1124 11:35:51.327487 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:35:51 crc kubenswrapper[5072]: I1124 11:35:51.333643 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7"] Nov 24 11:35:51 crc kubenswrapper[5072]: I1124 11:35:51.458787 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b41cd94b-9e44-431e-b3f9-76655cda4c0f-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7\" (UID: \"b41cd94b-9e44-431e-b3f9-76655cda4c0f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7" Nov 24 11:35:51 crc kubenswrapper[5072]: I1124 11:35:51.459266 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2qfg\" (UniqueName: \"kubernetes.io/projected/b41cd94b-9e44-431e-b3f9-76655cda4c0f-kube-api-access-r2qfg\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7\" (UID: \"b41cd94b-9e44-431e-b3f9-76655cda4c0f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7" Nov 24 11:35:51 crc kubenswrapper[5072]: I1124 11:35:51.459308 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b41cd94b-9e44-431e-b3f9-76655cda4c0f-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7\" (UID: \"b41cd94b-9e44-431e-b3f9-76655cda4c0f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7" Nov 24 11:35:51 crc kubenswrapper[5072]: I1124 11:35:51.561459 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b41cd94b-9e44-431e-b3f9-76655cda4c0f-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7\" (UID: \"b41cd94b-9e44-431e-b3f9-76655cda4c0f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7" Nov 24 11:35:51 crc kubenswrapper[5072]: I1124 11:35:51.561645 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2qfg\" (UniqueName: \"kubernetes.io/projected/b41cd94b-9e44-431e-b3f9-76655cda4c0f-kube-api-access-r2qfg\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7\" (UID: \"b41cd94b-9e44-431e-b3f9-76655cda4c0f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7" Nov 24 11:35:51 crc kubenswrapper[5072]: I1124 11:35:51.561682 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b41cd94b-9e44-431e-b3f9-76655cda4c0f-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7\" 
(UID: \"b41cd94b-9e44-431e-b3f9-76655cda4c0f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7" Nov 24 11:35:51 crc kubenswrapper[5072]: I1124 11:35:51.567657 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b41cd94b-9e44-431e-b3f9-76655cda4c0f-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7\" (UID: \"b41cd94b-9e44-431e-b3f9-76655cda4c0f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7" Nov 24 11:35:51 crc kubenswrapper[5072]: I1124 11:35:51.582192 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b41cd94b-9e44-431e-b3f9-76655cda4c0f-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7\" (UID: \"b41cd94b-9e44-431e-b3f9-76655cda4c0f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7" Nov 24 11:35:51 crc kubenswrapper[5072]: I1124 11:35:51.592016 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2qfg\" (UniqueName: \"kubernetes.io/projected/b41cd94b-9e44-431e-b3f9-76655cda4c0f-kube-api-access-r2qfg\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7\" (UID: \"b41cd94b-9e44-431e-b3f9-76655cda4c0f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7" Nov 24 11:35:51 crc kubenswrapper[5072]: I1124 11:35:51.644820 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7" Nov 24 11:35:52 crc kubenswrapper[5072]: I1124 11:35:52.150625 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7"] Nov 24 11:35:52 crc kubenswrapper[5072]: I1124 11:35:52.235749 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7" event={"ID":"b41cd94b-9e44-431e-b3f9-76655cda4c0f","Type":"ContainerStarted","Data":"96900651fa6525188b5d0f793e994fabe9012d02676f93026cd67d3642ee1b3c"} Nov 24 11:35:53 crc kubenswrapper[5072]: I1124 11:35:53.250840 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7" event={"ID":"b41cd94b-9e44-431e-b3f9-76655cda4c0f","Type":"ContainerStarted","Data":"f450160e093e287116986029ff191cc07b2f5fb5c29a036fd60bf3b4fb4b79cf"} Nov 24 11:35:53 crc kubenswrapper[5072]: I1124 11:35:53.278592 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7" podStartSLOduration=1.808849898 podStartE2EDuration="2.278567471s" podCreationTimestamp="2025-11-24 11:35:51 +0000 UTC" firstStartedPulling="2025-11-24 11:35:52.152119687 +0000 UTC m=+1603.863644163" lastFinishedPulling="2025-11-24 11:35:52.62183721 +0000 UTC m=+1604.333361736" observedRunningTime="2025-11-24 11:35:53.274150644 +0000 UTC m=+1604.985675160" watchObservedRunningTime="2025-11-24 11:35:53.278567471 +0000 UTC m=+1604.990091967" Nov 24 11:35:54 crc kubenswrapper[5072]: I1124 11:35:54.017061 5072 scope.go:117] "RemoveContainer" containerID="f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec" Nov 24 11:35:54 crc kubenswrapper[5072]: E1124 11:35:54.017951 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
Nov 24 11:35:54 crc kubenswrapper[5072]: E1124 11:35:54.017951 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5"
Nov 24 11:35:57 crc kubenswrapper[5072]: I1124 11:35:57.292903 5072 generic.go:334] "Generic (PLEG): container finished" podID="b41cd94b-9e44-431e-b3f9-76655cda4c0f" containerID="f450160e093e287116986029ff191cc07b2f5fb5c29a036fd60bf3b4fb4b79cf" exitCode=0
Nov 24 11:35:57 crc kubenswrapper[5072]: I1124 11:35:57.292965 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7" event={"ID":"b41cd94b-9e44-431e-b3f9-76655cda4c0f","Type":"ContainerDied","Data":"f450160e093e287116986029ff191cc07b2f5fb5c29a036fd60bf3b4fb4b79cf"}
Nov 24 11:35:58 crc kubenswrapper[5072]: I1124 11:35:58.722647 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7"
Nov 24 11:35:58 crc kubenswrapper[5072]: I1124 11:35:58.900595 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2qfg\" (UniqueName: \"kubernetes.io/projected/b41cd94b-9e44-431e-b3f9-76655cda4c0f-kube-api-access-r2qfg\") pod \"b41cd94b-9e44-431e-b3f9-76655cda4c0f\" (UID: \"b41cd94b-9e44-431e-b3f9-76655cda4c0f\") "
Nov 24 11:35:58 crc kubenswrapper[5072]: I1124 11:35:58.900674 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b41cd94b-9e44-431e-b3f9-76655cda4c0f-inventory\") pod \"b41cd94b-9e44-431e-b3f9-76655cda4c0f\" (UID: \"b41cd94b-9e44-431e-b3f9-76655cda4c0f\") "
Nov 24 11:35:58 crc kubenswrapper[5072]: I1124 11:35:58.900695 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b41cd94b-9e44-431e-b3f9-76655cda4c0f-ssh-key\") pod \"b41cd94b-9e44-431e-b3f9-76655cda4c0f\" (UID: \"b41cd94b-9e44-431e-b3f9-76655cda4c0f\") "
Nov 24 11:35:58 crc kubenswrapper[5072]: I1124 11:35:58.909834 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b41cd94b-9e44-431e-b3f9-76655cda4c0f-kube-api-access-r2qfg" (OuterVolumeSpecName: "kube-api-access-r2qfg") pod "b41cd94b-9e44-431e-b3f9-76655cda4c0f" (UID: "b41cd94b-9e44-431e-b3f9-76655cda4c0f"). InnerVolumeSpecName "kube-api-access-r2qfg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:35:58 crc kubenswrapper[5072]: I1124 11:35:58.930211 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41cd94b-9e44-431e-b3f9-76655cda4c0f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b41cd94b-9e44-431e-b3f9-76655cda4c0f" (UID: "b41cd94b-9e44-431e-b3f9-76655cda4c0f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:35:58 crc kubenswrapper[5072]: I1124 11:35:58.939505 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41cd94b-9e44-431e-b3f9-76655cda4c0f-inventory" (OuterVolumeSpecName: "inventory") pod "b41cd94b-9e44-431e-b3f9-76655cda4c0f" (UID: "b41cd94b-9e44-431e-b3f9-76655cda4c0f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:35:59 crc kubenswrapper[5072]: I1124 11:35:59.002805 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2qfg\" (UniqueName: \"kubernetes.io/projected/b41cd94b-9e44-431e-b3f9-76655cda4c0f-kube-api-access-r2qfg\") on node \"crc\" DevicePath \"\""
Nov 24 11:35:59 crc kubenswrapper[5072]: I1124 11:35:59.002855 5072 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b41cd94b-9e44-431e-b3f9-76655cda4c0f-inventory\") on node \"crc\" DevicePath \"\""
Nov 24 11:35:59 crc kubenswrapper[5072]: I1124 11:35:59.002873 5072 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b41cd94b-9e44-431e-b3f9-76655cda4c0f-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 24 11:35:59 crc kubenswrapper[5072]: I1124 11:35:59.313889 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7" event={"ID":"b41cd94b-9e44-431e-b3f9-76655cda4c0f","Type":"ContainerDied","Data":"96900651fa6525188b5d0f793e994fabe9012d02676f93026cd67d3642ee1b3c"}
Nov 24 11:35:59 crc kubenswrapper[5072]: I1124 11:35:59.313931 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96900651fa6525188b5d0f793e994fabe9012d02676f93026cd67d3642ee1b3c"
Nov 24 11:35:59 crc kubenswrapper[5072]: I1124 11:35:59.313971 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7"
Nov 24 11:35:59 crc kubenswrapper[5072]: I1124 11:35:59.414066 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm"]
Nov 24 11:35:59 crc kubenswrapper[5072]: E1124 11:35:59.414574 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b41cd94b-9e44-431e-b3f9-76655cda4c0f" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Nov 24 11:35:59 crc kubenswrapper[5072]: I1124 11:35:59.414599 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="b41cd94b-9e44-431e-b3f9-76655cda4c0f" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Nov 24 11:35:59 crc kubenswrapper[5072]: I1124 11:35:59.414845 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="b41cd94b-9e44-431e-b3f9-76655cda4c0f" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm" Nov 24 11:35:59 crc kubenswrapper[5072]: I1124 11:35:59.417901 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:35:59 crc kubenswrapper[5072]: I1124 11:35:59.418226 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b6s7d" Nov 24 11:35:59 crc kubenswrapper[5072]: I1124 11:35:59.418621 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:35:59 crc kubenswrapper[5072]: I1124 11:35:59.419667 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:35:59 crc kubenswrapper[5072]: I1124 11:35:59.427756 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm"] Nov 24 11:35:59 crc kubenswrapper[5072]: I1124 11:35:59.512858 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3c835dc-ad27-4cd1-a28b-4875b1e87d8c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm\" (UID: \"a3c835dc-ad27-4cd1-a28b-4875b1e87d8c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm" Nov 24 11:35:59 crc kubenswrapper[5072]: I1124 11:35:59.512945 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3c835dc-ad27-4cd1-a28b-4875b1e87d8c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm\" (UID: \"a3c835dc-ad27-4cd1-a28b-4875b1e87d8c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm" Nov 24 11:35:59 crc kubenswrapper[5072]: I1124 11:35:59.513054 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl8sb\" (UniqueName: \"kubernetes.io/projected/a3c835dc-ad27-4cd1-a28b-4875b1e87d8c-kube-api-access-dl8sb\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm\" (UID: \"a3c835dc-ad27-4cd1-a28b-4875b1e87d8c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm" Nov 24 11:35:59 crc kubenswrapper[5072]: I1124 11:35:59.614026 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3c835dc-ad27-4cd1-a28b-4875b1e87d8c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm\" (UID: \"a3c835dc-ad27-4cd1-a28b-4875b1e87d8c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm" Nov 24 11:35:59 crc kubenswrapper[5072]: I1124 11:35:59.614099 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3c835dc-ad27-4cd1-a28b-4875b1e87d8c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm\" (UID: \"a3c835dc-ad27-4cd1-a28b-4875b1e87d8c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm" Nov 24 11:35:59 crc kubenswrapper[5072]: I1124 11:35:59.614153 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl8sb\" (UniqueName: \"kubernetes.io/projected/a3c835dc-ad27-4cd1-a28b-4875b1e87d8c-kube-api-access-dl8sb\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm\" 
(UID: \"a3c835dc-ad27-4cd1-a28b-4875b1e87d8c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm" Nov 24 11:35:59 crc kubenswrapper[5072]: I1124 11:35:59.620683 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3c835dc-ad27-4cd1-a28b-4875b1e87d8c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm\" (UID: \"a3c835dc-ad27-4cd1-a28b-4875b1e87d8c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm" Nov 24 11:35:59 crc kubenswrapper[5072]: I1124 11:35:59.622554 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3c835dc-ad27-4cd1-a28b-4875b1e87d8c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm\" (UID: \"a3c835dc-ad27-4cd1-a28b-4875b1e87d8c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm" Nov 24 11:35:59 crc kubenswrapper[5072]: I1124 11:35:59.641860 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl8sb\" (UniqueName: \"kubernetes.io/projected/a3c835dc-ad27-4cd1-a28b-4875b1e87d8c-kube-api-access-dl8sb\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm\" (UID: \"a3c835dc-ad27-4cd1-a28b-4875b1e87d8c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm" Nov 24 11:35:59 crc kubenswrapper[5072]: I1124 11:35:59.739136 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm" Nov 24 11:36:00 crc kubenswrapper[5072]: I1124 11:36:00.283710 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm"] Nov 24 11:36:00 crc kubenswrapper[5072]: I1124 11:36:00.326537 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm" event={"ID":"a3c835dc-ad27-4cd1-a28b-4875b1e87d8c","Type":"ContainerStarted","Data":"50e62610ad6ea558f617ee4ae7d7abfc3f58925fba9705534d2c612b057d2301"} Nov 24 11:36:01 crc kubenswrapper[5072]: I1124 11:36:01.342582 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm" event={"ID":"a3c835dc-ad27-4cd1-a28b-4875b1e87d8c","Type":"ContainerStarted","Data":"ebaa4b9965366c6c8a7732aed495cc04d83610061550042df64b483ae56e7edb"} Nov 24 11:36:01 crc kubenswrapper[5072]: I1124 11:36:01.365743 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm" podStartSLOduration=1.614854304 podStartE2EDuration="2.365715525s" podCreationTimestamp="2025-11-24 11:35:59 +0000 UTC" firstStartedPulling="2025-11-24 11:36:00.286285459 +0000 UTC m=+1611.997809935" lastFinishedPulling="2025-11-24 11:36:01.03714666 +0000 UTC m=+1612.748671156" observedRunningTime="2025-11-24 11:36:01.362289652 +0000 UTC m=+1613.073814168" watchObservedRunningTime="2025-11-24 11:36:01.365715525 +0000 UTC m=+1613.077240041" Nov 24 11:36:03 crc kubenswrapper[5072]: I1124 11:36:03.046988 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-w6mv2"] Nov 24 11:36:03 crc kubenswrapper[5072]: I1124 11:36:03.063016 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-w6mv2"] Nov 24 11:36:05 crc kubenswrapper[5072]: I1124 11:36:05.032145 5072 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="bb192e24-d3b0-4e96-8bbf-edb5b93ecf64" path="/var/lib/kubelet/pods/bb192e24-d3b0-4e96-8bbf-edb5b93ecf64/volumes" Nov 24 11:36:07 crc kubenswrapper[5072]: I1124 11:36:07.017326 5072 scope.go:117] "RemoveContainer" containerID="f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec" Nov 24 11:36:07 crc kubenswrapper[5072]: E1124 11:36:07.018109 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:36:08 crc kubenswrapper[5072]: I1124 11:36:08.033345 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-6wkj4"] Nov 24 11:36:08 crc kubenswrapper[5072]: I1124 11:36:08.040843 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-6wkj4"] Nov 24 11:36:09 crc kubenswrapper[5072]: I1124 11:36:09.049486 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68f6d27e-d239-4e24-8381-872893433a07" path="/var/lib/kubelet/pods/68f6d27e-d239-4e24-8381-872893433a07/volumes" Nov 24 11:36:12 crc kubenswrapper[5072]: I1124 11:36:12.054893 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jrmwr"] Nov 24 11:36:12 crc kubenswrapper[5072]: I1124 11:36:12.070788 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-jrmwr"] Nov 24 11:36:13 crc kubenswrapper[5072]: I1124 11:36:13.040538 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9d9bdb5-a7d6-4caf-9212-4707da33f459" path="/var/lib/kubelet/pods/b9d9bdb5-a7d6-4caf-9212-4707da33f459/volumes" Nov 24 11:36:14 crc kubenswrapper[5072]: I1124 11:36:14.978758 5072 scope.go:117] "RemoveContainer" containerID="95efdc3d4ac893766dbae25cc0770efd6934b697c873d7eb81fc63d472f44a96" Nov 24 11:36:15 crc kubenswrapper[5072]: I1124 11:36:15.053587 5072 scope.go:117] "RemoveContainer" containerID="01682fdca88f8d5d594c3f26d4e2b74dcece45edb2e28f32c44602dfccc2f459" Nov 24 11:36:15 crc kubenswrapper[5072]: I1124 11:36:15.134813 5072 scope.go:117] "RemoveContainer" containerID="d2c1dbb6da557058d66a82d8c7443c22025921dd8c1281cc02d33575ed58d7a9" Nov 24 11:36:15 crc kubenswrapper[5072]: I1124 11:36:15.171646 5072 scope.go:117] "RemoveContainer" containerID="515c2d277fdb1783a233f9ecda35204f257df0e932af496a6631c73337ca0924" Nov 24 11:36:15 crc kubenswrapper[5072]: I1124 11:36:15.206762 5072 scope.go:117] "RemoveContainer" containerID="f6344617c92e0a271ec3297865b802c61af6300042ac6404db0c92e563bbc952" Nov 24 11:36:15 crc kubenswrapper[5072]: I1124 11:36:15.262842 5072 scope.go:117] "RemoveContainer" containerID="7d6b84973fd5541609924ca765899daaaa67f701c20299c73f35e8c6a1ccfc28" Nov 24 11:36:15 crc kubenswrapper[5072]: I1124 11:36:15.299298 5072 scope.go:117] "RemoveContainer" containerID="6f0aee7456017afe4c9bdab4835d829de82ab09d8479737a6f5ff3ba41e709f2" Nov 24 11:36:15 crc kubenswrapper[5072]: I1124 11:36:15.336189 5072 scope.go:117] "RemoveContainer" containerID="f0564c23ecc9f7d6844b1de314693700c94f1744400d7a1f1d3ca65508eadd4c" Nov 24 11:36:15 crc kubenswrapper[5072]: I1124 11:36:15.359893 5072 scope.go:117] "RemoveContainer" 
containerID="79bcd35dd6d76a99b90dfd2d188142a7036a6a9bf0d2ee9b43a613e8080e0c46" Nov 24 11:36:15 crc kubenswrapper[5072]: I1124 11:36:15.383747 5072 scope.go:117] "RemoveContainer" containerID="4936f31cc6e34607b415a33f58a9dd3596dd27fc84aacd1c3707abf92fcca017" Nov 24 11:36:15 crc kubenswrapper[5072]: I1124 11:36:15.405701 5072 scope.go:117] "RemoveContainer" containerID="7661cbea52672967aab7f54dd6d29e802a68ce4065f8db181b7e3e2de73f8240" Nov 24 11:36:21 crc kubenswrapper[5072]: I1124 11:36:21.017799 5072 scope.go:117] "RemoveContainer" containerID="f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec" Nov 24 11:36:21 crc kubenswrapper[5072]: E1124 11:36:21.019752 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:36:24 crc kubenswrapper[5072]: I1124 11:36:24.049160 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-g5npx"] Nov 24 11:36:24 crc kubenswrapper[5072]: I1124 11:36:24.062514 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-g5npx"] Nov 24 11:36:25 crc kubenswrapper[5072]: I1124 11:36:25.032752 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feff4031-5012-468f-8dd6-d58c5dae8d29" path="/var/lib/kubelet/pods/feff4031-5012-468f-8dd6-d58c5dae8d29/volumes" Nov 24 11:36:27 crc kubenswrapper[5072]: I1124 11:36:27.050196 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-8npk7"] Nov 24 11:36:27 crc kubenswrapper[5072]: I1124 11:36:27.065301 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-8npk7"] Nov 24 11:36:29 crc kubenswrapper[5072]: I1124 11:36:29.038233 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab063039-b4d9-45d8-9336-35316fd1ab08" path="/var/lib/kubelet/pods/ab063039-b4d9-45d8-9336-35316fd1ab08/volumes" Nov 24 11:36:32 crc kubenswrapper[5072]: I1124 11:36:32.016748 5072 scope.go:117] "RemoveContainer" containerID="f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec" Nov 24 11:36:32 crc kubenswrapper[5072]: E1124 11:36:32.017608 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:36:44 crc kubenswrapper[5072]: I1124 11:36:44.016154 5072 scope.go:117] "RemoveContainer" containerID="f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec" Nov 24 11:36:44 crc kubenswrapper[5072]: E1124 11:36:44.016855 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" 
podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:36:57 crc kubenswrapper[5072]: I1124 11:36:57.950255 5072 generic.go:334] "Generic (PLEG): container finished" podID="a3c835dc-ad27-4cd1-a28b-4875b1e87d8c" containerID="ebaa4b9965366c6c8a7732aed495cc04d83610061550042df64b483ae56e7edb" exitCode=0 Nov 24 11:36:57 crc kubenswrapper[5072]: I1124 11:36:57.950410 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm" event={"ID":"a3c835dc-ad27-4cd1-a28b-4875b1e87d8c","Type":"ContainerDied","Data":"ebaa4b9965366c6c8a7732aed495cc04d83610061550042df64b483ae56e7edb"} Nov 24 11:36:58 crc kubenswrapper[5072]: I1124 11:36:58.016844 5072 scope.go:117] "RemoveContainer" containerID="f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec" Nov 24 11:36:58 crc kubenswrapper[5072]: E1124 11:36:58.017163 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:36:59 crc kubenswrapper[5072]: I1124 11:36:59.323730 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm" Nov 24 11:36:59 crc kubenswrapper[5072]: I1124 11:36:59.437350 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3c835dc-ad27-4cd1-a28b-4875b1e87d8c-ssh-key\") pod \"a3c835dc-ad27-4cd1-a28b-4875b1e87d8c\" (UID: \"a3c835dc-ad27-4cd1-a28b-4875b1e87d8c\") " Nov 24 11:36:59 crc kubenswrapper[5072]: I1124 11:36:59.437468 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3c835dc-ad27-4cd1-a28b-4875b1e87d8c-inventory\") pod \"a3c835dc-ad27-4cd1-a28b-4875b1e87d8c\" (UID: \"a3c835dc-ad27-4cd1-a28b-4875b1e87d8c\") " Nov 24 11:36:59 crc kubenswrapper[5072]: I1124 11:36:59.437602 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dl8sb\" (UniqueName: \"kubernetes.io/projected/a3c835dc-ad27-4cd1-a28b-4875b1e87d8c-kube-api-access-dl8sb\") pod \"a3c835dc-ad27-4cd1-a28b-4875b1e87d8c\" (UID: \"a3c835dc-ad27-4cd1-a28b-4875b1e87d8c\") " Nov 24 11:36:59 crc kubenswrapper[5072]: I1124 11:36:59.443210 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3c835dc-ad27-4cd1-a28b-4875b1e87d8c-kube-api-access-dl8sb" (OuterVolumeSpecName: "kube-api-access-dl8sb") pod "a3c835dc-ad27-4cd1-a28b-4875b1e87d8c" (UID: "a3c835dc-ad27-4cd1-a28b-4875b1e87d8c"). InnerVolumeSpecName "kube-api-access-dl8sb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:36:59 crc kubenswrapper[5072]: I1124 11:36:59.479404 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3c835dc-ad27-4cd1-a28b-4875b1e87d8c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a3c835dc-ad27-4cd1-a28b-4875b1e87d8c" (UID: "a3c835dc-ad27-4cd1-a28b-4875b1e87d8c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:36:59 crc kubenswrapper[5072]: I1124 11:36:59.483270 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3c835dc-ad27-4cd1-a28b-4875b1e87d8c-inventory" (OuterVolumeSpecName: "inventory") pod "a3c835dc-ad27-4cd1-a28b-4875b1e87d8c" (UID: "a3c835dc-ad27-4cd1-a28b-4875b1e87d8c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:36:59 crc kubenswrapper[5072]: I1124 11:36:59.539533 5072 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3c835dc-ad27-4cd1-a28b-4875b1e87d8c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:59 crc kubenswrapper[5072]: I1124 11:36:59.539565 5072 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3c835dc-ad27-4cd1-a28b-4875b1e87d8c-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:59 crc kubenswrapper[5072]: I1124 11:36:59.539578 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dl8sb\" (UniqueName: \"kubernetes.io/projected/a3c835dc-ad27-4cd1-a28b-4875b1e87d8c-kube-api-access-dl8sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:59 crc kubenswrapper[5072]: I1124 11:36:59.968814 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm" event={"ID":"a3c835dc-ad27-4cd1-a28b-4875b1e87d8c","Type":"ContainerDied","Data":"50e62610ad6ea558f617ee4ae7d7abfc3f58925fba9705534d2c612b057d2301"} Nov 24 11:36:59 crc kubenswrapper[5072]: I1124 11:36:59.968867 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50e62610ad6ea558f617ee4ae7d7abfc3f58925fba9705534d2c612b057d2301" Nov 24 11:36:59 crc kubenswrapper[5072]: I1124 11:36:59.968936 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm" Nov 24 11:37:00 crc kubenswrapper[5072]: I1124 11:37:00.054992 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-rl64x"] Nov 24 11:37:00 crc kubenswrapper[5072]: E1124 11:37:00.055776 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3c835dc-ad27-4cd1-a28b-4875b1e87d8c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:37:00 crc kubenswrapper[5072]: I1124 11:37:00.055842 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c835dc-ad27-4cd1-a28b-4875b1e87d8c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:37:00 crc kubenswrapper[5072]: I1124 11:37:00.056096 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3c835dc-ad27-4cd1-a28b-4875b1e87d8c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:37:00 crc kubenswrapper[5072]: I1124 11:37:00.058984 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-rl64x" Nov 24 11:37:00 crc kubenswrapper[5072]: I1124 11:37:00.061479 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b6s7d" Nov 24 11:37:00 crc kubenswrapper[5072]: I1124 11:37:00.061480 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:37:00 crc kubenswrapper[5072]: I1124 11:37:00.061612 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:37:00 crc kubenswrapper[5072]: I1124 11:37:00.063771 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:37:00 crc kubenswrapper[5072]: I1124 11:37:00.075782 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-rl64x"] Nov 24 11:37:00 crc kubenswrapper[5072]: I1124 11:37:00.250696 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9d9d2c85-76f2-4a51-be9c-7f2436ae35f1-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-rl64x\" (UID: \"9d9d2c85-76f2-4a51-be9c-7f2436ae35f1\") " pod="openstack/ssh-known-hosts-edpm-deployment-rl64x" Nov 24 11:37:00 crc kubenswrapper[5072]: I1124 11:37:00.250769 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsp95\" (UniqueName: \"kubernetes.io/projected/9d9d2c85-76f2-4a51-be9c-7f2436ae35f1-kube-api-access-rsp95\") pod \"ssh-known-hosts-edpm-deployment-rl64x\" (UID: \"9d9d2c85-76f2-4a51-be9c-7f2436ae35f1\") " pod="openstack/ssh-known-hosts-edpm-deployment-rl64x" Nov 24 11:37:00 crc kubenswrapper[5072]: I1124 11:37:00.250831 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d9d2c85-76f2-4a51-be9c-7f2436ae35f1-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-rl64x\" (UID: \"9d9d2c85-76f2-4a51-be9c-7f2436ae35f1\") " pod="openstack/ssh-known-hosts-edpm-deployment-rl64x" Nov 24 11:37:00 crc kubenswrapper[5072]: I1124 11:37:00.352416 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9d9d2c85-76f2-4a51-be9c-7f2436ae35f1-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-rl64x\" (UID: \"9d9d2c85-76f2-4a51-be9c-7f2436ae35f1\") " pod="openstack/ssh-known-hosts-edpm-deployment-rl64x" Nov 24 11:37:00 crc kubenswrapper[5072]: I1124 11:37:00.352481 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsp95\" (UniqueName: \"kubernetes.io/projected/9d9d2c85-76f2-4a51-be9c-7f2436ae35f1-kube-api-access-rsp95\") pod \"ssh-known-hosts-edpm-deployment-rl64x\" (UID: \"9d9d2c85-76f2-4a51-be9c-7f2436ae35f1\") " pod="openstack/ssh-known-hosts-edpm-deployment-rl64x" Nov 24 11:37:00 crc kubenswrapper[5072]: I1124 11:37:00.352569 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d9d2c85-76f2-4a51-be9c-7f2436ae35f1-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-rl64x\" (UID: \"9d9d2c85-76f2-4a51-be9c-7f2436ae35f1\") " pod="openstack/ssh-known-hosts-edpm-deployment-rl64x" Nov 24 11:37:00 crc 
kubenswrapper[5072]: I1124 11:37:00.360983 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d9d2c85-76f2-4a51-be9c-7f2436ae35f1-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-rl64x\" (UID: \"9d9d2c85-76f2-4a51-be9c-7f2436ae35f1\") " pod="openstack/ssh-known-hosts-edpm-deployment-rl64x" Nov 24 11:37:00 crc kubenswrapper[5072]: I1124 11:37:00.361675 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9d9d2c85-76f2-4a51-be9c-7f2436ae35f1-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-rl64x\" (UID: \"9d9d2c85-76f2-4a51-be9c-7f2436ae35f1\") " pod="openstack/ssh-known-hosts-edpm-deployment-rl64x" Nov 24 11:37:00 crc kubenswrapper[5072]: I1124 11:37:00.375128 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsp95\" (UniqueName: \"kubernetes.io/projected/9d9d2c85-76f2-4a51-be9c-7f2436ae35f1-kube-api-access-rsp95\") pod \"ssh-known-hosts-edpm-deployment-rl64x\" (UID: \"9d9d2c85-76f2-4a51-be9c-7f2436ae35f1\") " pod="openstack/ssh-known-hosts-edpm-deployment-rl64x" Nov 24 11:37:00 crc kubenswrapper[5072]: I1124 11:37:00.391179 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-rl64x" Nov 24 11:37:00 crc kubenswrapper[5072]: I1124 11:37:00.683013 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-rl64x"] Nov 24 11:37:00 crc kubenswrapper[5072]: I1124 11:37:00.984746 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-rl64x" event={"ID":"9d9d2c85-76f2-4a51-be9c-7f2436ae35f1","Type":"ContainerStarted","Data":"f396a3b807a0bb94a2dee30aec92dba0474eeab8d5063983b6acadb517b69cfc"} Nov 24 11:37:01 crc kubenswrapper[5072]: I1124 11:37:01.998526 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-rl64x" event={"ID":"9d9d2c85-76f2-4a51-be9c-7f2436ae35f1","Type":"ContainerStarted","Data":"66afa1d556fec312d4ddeb598933ce847e0b18eec852dc9e5974621983e42561"} Nov 24 11:37:02 crc kubenswrapper[5072]: I1124 11:37:02.035178 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-rl64x" podStartSLOduration=1.499657901 podStartE2EDuration="2.035152817s" podCreationTimestamp="2025-11-24 11:37:00 +0000 UTC" firstStartedPulling="2025-11-24 11:37:00.697200901 +0000 UTC m=+1672.408725387" lastFinishedPulling="2025-11-24 11:37:01.232695817 +0000 UTC m=+1672.944220303" observedRunningTime="2025-11-24 11:37:02.025810051 +0000 UTC m=+1673.737334527" watchObservedRunningTime="2025-11-24 11:37:02.035152817 +0000 UTC m=+1673.746677303" Nov 24 11:37:05 crc kubenswrapper[5072]: I1124 11:37:05.067162 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-7cpcc"] Nov 24 11:37:05 crc kubenswrapper[5072]: I1124 11:37:05.075339 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-fa17-account-create-6k8xl"] Nov 24 11:37:05 crc kubenswrapper[5072]: I1124 11:37:05.083088 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-bc2xz"] Nov 24 11:37:05 crc kubenswrapper[5072]: I1124 11:37:05.090223 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-47a1-account-create-w245w"] Nov 24 11:37:05 crc 
kubenswrapper[5072]: I1124 11:37:05.097247 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-bf4a-account-create-st8r6"] Nov 24 11:37:05 crc kubenswrapper[5072]: I1124 11:37:05.105497 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-d9mv6"] Nov 24 11:37:05 crc kubenswrapper[5072]: I1124 11:37:05.112257 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-47a1-account-create-w245w"] Nov 24 11:37:05 crc kubenswrapper[5072]: I1124 11:37:05.118757 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-bf4a-account-create-st8r6"] Nov 24 11:37:05 crc kubenswrapper[5072]: I1124 11:37:05.125549 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-bc2xz"] Nov 24 11:37:05 crc kubenswrapper[5072]: I1124 11:37:05.133112 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-d9mv6"] Nov 24 11:37:05 crc kubenswrapper[5072]: I1124 11:37:05.140139 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-fa17-account-create-6k8xl"] Nov 24 11:37:05 crc kubenswrapper[5072]: I1124 11:37:05.147034 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-7cpcc"] Nov 24 11:37:07 crc kubenswrapper[5072]: I1124 11:37:07.032884 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="784a74b5-3431-4fc5-ac75-d759b1f2a4cb" path="/var/lib/kubelet/pods/784a74b5-3431-4fc5-ac75-d759b1f2a4cb/volumes" Nov 24 11:37:07 crc kubenswrapper[5072]: I1124 11:37:07.034628 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a645a183-9f0b-4761-89d5-9ed93d898c5d" path="/var/lib/kubelet/pods/a645a183-9f0b-4761-89d5-9ed93d898c5d/volumes" Nov 24 11:37:07 crc kubenswrapper[5072]: I1124 11:37:07.035707 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecd15413-8bab-481f-869c-02b3fd9fadc2" path="/var/lib/kubelet/pods/ecd15413-8bab-481f-869c-02b3fd9fadc2/volumes" Nov 24 11:37:07 crc kubenswrapper[5072]: I1124 11:37:07.037110 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef0ae516-a614-4d41-b48e-6ec7544ecc8b" path="/var/lib/kubelet/pods/ef0ae516-a614-4d41-b48e-6ec7544ecc8b/volumes" Nov 24 11:37:07 crc kubenswrapper[5072]: I1124 11:37:07.039231 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f47541bf-a131-46fe-81d9-30eb49272885" path="/var/lib/kubelet/pods/f47541bf-a131-46fe-81d9-30eb49272885/volumes" Nov 24 11:37:07 crc kubenswrapper[5072]: I1124 11:37:07.040297 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6cf63fa-6157-4ba4-96fb-2b72065bbab7" path="/var/lib/kubelet/pods/f6cf63fa-6157-4ba4-96fb-2b72065bbab7/volumes" Nov 24 11:37:09 crc kubenswrapper[5072]: I1124 11:37:09.070663 5072 generic.go:334] "Generic (PLEG): container finished" podID="9d9d2c85-76f2-4a51-be9c-7f2436ae35f1" containerID="66afa1d556fec312d4ddeb598933ce847e0b18eec852dc9e5974621983e42561" exitCode=0 Nov 24 11:37:09 crc kubenswrapper[5072]: I1124 11:37:09.070782 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-rl64x" event={"ID":"9d9d2c85-76f2-4a51-be9c-7f2436ae35f1","Type":"ContainerDied","Data":"66afa1d556fec312d4ddeb598933ce847e0b18eec852dc9e5974621983e42561"} Nov 24 11:37:10 crc kubenswrapper[5072]: I1124 11:37:10.561443 5072 util.go:48] "No ready sandbox for pod can be found. 
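
Every EDPM job in this section ends the same way: generic.go:334 reports "Generic (PLEG): container finished ... exitCode=0" and the sync loop emits the matching "ContainerDied" event, which is the success signal that lets the operator create the next job pod. A scanner that would surface any job exiting non-zero instead (Python; the field layout is copied from the lines above):

    import re

    FINISHED = re.compile(
        r'container finished" podID="([^"]+)" containerID="([^"]+)" exitCode=(\d+)')

    def failed_containers(lines):
        """Return (podID, containerID, exitCode) for non-zero exits."""
        out = []
        for line in lines:
            m = FINISHED.search(line)
            if m and m.group(3) != "0":
                out.append((m.group(1), m.group(2), int(m.group(3))))
        return out

In this section the scanner comes back empty: every finished container logs exitCode=0.
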
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-rl64x" Nov 24 11:37:10 crc kubenswrapper[5072]: I1124 11:37:10.646872 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d9d2c85-76f2-4a51-be9c-7f2436ae35f1-ssh-key-openstack-edpm-ipam\") pod \"9d9d2c85-76f2-4a51-be9c-7f2436ae35f1\" (UID: \"9d9d2c85-76f2-4a51-be9c-7f2436ae35f1\") " Nov 24 11:37:10 crc kubenswrapper[5072]: I1124 11:37:10.647332 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9d9d2c85-76f2-4a51-be9c-7f2436ae35f1-inventory-0\") pod \"9d9d2c85-76f2-4a51-be9c-7f2436ae35f1\" (UID: \"9d9d2c85-76f2-4a51-be9c-7f2436ae35f1\") " Nov 24 11:37:10 crc kubenswrapper[5072]: I1124 11:37:10.647605 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsp95\" (UniqueName: \"kubernetes.io/projected/9d9d2c85-76f2-4a51-be9c-7f2436ae35f1-kube-api-access-rsp95\") pod \"9d9d2c85-76f2-4a51-be9c-7f2436ae35f1\" (UID: \"9d9d2c85-76f2-4a51-be9c-7f2436ae35f1\") " Nov 24 11:37:10 crc kubenswrapper[5072]: I1124 11:37:10.651923 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d9d2c85-76f2-4a51-be9c-7f2436ae35f1-kube-api-access-rsp95" (OuterVolumeSpecName: "kube-api-access-rsp95") pod "9d9d2c85-76f2-4a51-be9c-7f2436ae35f1" (UID: "9d9d2c85-76f2-4a51-be9c-7f2436ae35f1"). InnerVolumeSpecName "kube-api-access-rsp95". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:37:10 crc kubenswrapper[5072]: I1124 11:37:10.672500 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d9d2c85-76f2-4a51-be9c-7f2436ae35f1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9d9d2c85-76f2-4a51-be9c-7f2436ae35f1" (UID: "9d9d2c85-76f2-4a51-be9c-7f2436ae35f1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:10 crc kubenswrapper[5072]: I1124 11:37:10.700348 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d9d2c85-76f2-4a51-be9c-7f2436ae35f1-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "9d9d2c85-76f2-4a51-be9c-7f2436ae35f1" (UID: "9d9d2c85-76f2-4a51-be9c-7f2436ae35f1"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:10 crc kubenswrapper[5072]: I1124 11:37:10.749864 5072 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d9d2c85-76f2-4a51-be9c-7f2436ae35f1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:10 crc kubenswrapper[5072]: I1124 11:37:10.749904 5072 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9d9d2c85-76f2-4a51-be9c-7f2436ae35f1-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:10 crc kubenswrapper[5072]: I1124 11:37:10.749916 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsp95\" (UniqueName: \"kubernetes.io/projected/9d9d2c85-76f2-4a51-be9c-7f2436ae35f1-kube-api-access-rsp95\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:11 crc kubenswrapper[5072]: I1124 11:37:11.093341 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-rl64x" event={"ID":"9d9d2c85-76f2-4a51-be9c-7f2436ae35f1","Type":"ContainerDied","Data":"f396a3b807a0bb94a2dee30aec92dba0474eeab8d5063983b6acadb517b69cfc"} Nov 24 11:37:11 crc kubenswrapper[5072]: I1124 11:37:11.093695 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f396a3b807a0bb94a2dee30aec92dba0474eeab8d5063983b6acadb517b69cfc" Nov 24 11:37:11 crc kubenswrapper[5072]: I1124 11:37:11.093502 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-rl64x" Nov 24 11:37:11 crc kubenswrapper[5072]: I1124 11:37:11.163780 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lrz9p"] Nov 24 11:37:11 crc kubenswrapper[5072]: E1124 11:37:11.164567 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d9d2c85-76f2-4a51-be9c-7f2436ae35f1" containerName="ssh-known-hosts-edpm-deployment" Nov 24 11:37:11 crc kubenswrapper[5072]: I1124 11:37:11.164648 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d9d2c85-76f2-4a51-be9c-7f2436ae35f1" containerName="ssh-known-hosts-edpm-deployment" Nov 24 11:37:11 crc kubenswrapper[5072]: I1124 11:37:11.164929 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d9d2c85-76f2-4a51-be9c-7f2436ae35f1" containerName="ssh-known-hosts-edpm-deployment" Nov 24 11:37:11 crc kubenswrapper[5072]: I1124 11:37:11.165731 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lrz9p" Nov 24 11:37:11 crc kubenswrapper[5072]: I1124 11:37:11.169034 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:37:11 crc kubenswrapper[5072]: I1124 11:37:11.169055 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:37:11 crc kubenswrapper[5072]: I1124 11:37:11.169324 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:37:11 crc kubenswrapper[5072]: I1124 11:37:11.169161 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b6s7d" Nov 24 11:37:11 crc kubenswrapper[5072]: I1124 11:37:11.182290 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lrz9p"] Nov 24 11:37:11 crc kubenswrapper[5072]: I1124 11:37:11.259989 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50e2848c-d753-449b-ad0d-2b8a862cd800-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lrz9p\" (UID: \"50e2848c-d753-449b-ad0d-2b8a862cd800\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lrz9p" Nov 24 11:37:11 crc kubenswrapper[5072]: I1124 11:37:11.260121 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/50e2848c-d753-449b-ad0d-2b8a862cd800-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lrz9p\" (UID: \"50e2848c-d753-449b-ad0d-2b8a862cd800\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lrz9p" Nov 24 11:37:11 crc kubenswrapper[5072]: I1124 11:37:11.260159 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfbk2\" (UniqueName: \"kubernetes.io/projected/50e2848c-d753-449b-ad0d-2b8a862cd800-kube-api-access-kfbk2\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lrz9p\" (UID: \"50e2848c-d753-449b-ad0d-2b8a862cd800\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lrz9p" Nov 24 11:37:11 crc kubenswrapper[5072]: I1124 11:37:11.361913 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50e2848c-d753-449b-ad0d-2b8a862cd800-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lrz9p\" (UID: \"50e2848c-d753-449b-ad0d-2b8a862cd800\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lrz9p" Nov 24 11:37:11 crc kubenswrapper[5072]: I1124 11:37:11.362097 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/50e2848c-d753-449b-ad0d-2b8a862cd800-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lrz9p\" (UID: \"50e2848c-d753-449b-ad0d-2b8a862cd800\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lrz9p" Nov 24 11:37:11 crc kubenswrapper[5072]: I1124 11:37:11.362144 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfbk2\" (UniqueName: \"kubernetes.io/projected/50e2848c-d753-449b-ad0d-2b8a862cd800-kube-api-access-kfbk2\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lrz9p\" (UID: \"50e2848c-d753-449b-ad0d-2b8a862cd800\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lrz9p" Nov 24 11:37:11 crc kubenswrapper[5072]: I1124 11:37:11.368162 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/50e2848c-d753-449b-ad0d-2b8a862cd800-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lrz9p\" (UID: \"50e2848c-d753-449b-ad0d-2b8a862cd800\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lrz9p" Nov 24 11:37:11 crc kubenswrapper[5072]: I1124 11:37:11.368256 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50e2848c-d753-449b-ad0d-2b8a862cd800-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lrz9p\" (UID: \"50e2848c-d753-449b-ad0d-2b8a862cd800\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lrz9p" Nov 24 11:37:11 crc kubenswrapper[5072]: I1124 11:37:11.379794 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfbk2\" (UniqueName: \"kubernetes.io/projected/50e2848c-d753-449b-ad0d-2b8a862cd800-kube-api-access-kfbk2\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lrz9p\" (UID: \"50e2848c-d753-449b-ad0d-2b8a862cd800\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lrz9p" Nov 24 11:37:11 crc kubenswrapper[5072]: I1124 11:37:11.481304 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lrz9p" Nov 24 11:37:12 crc kubenswrapper[5072]: I1124 11:37:12.016321 5072 scope.go:117] "RemoveContainer" containerID="f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec" Nov 24 11:37:12 crc kubenswrapper[5072]: E1124 11:37:12.016939 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:37:12 crc kubenswrapper[5072]: I1124 11:37:12.022739 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lrz9p"] Nov 24 11:37:12 crc kubenswrapper[5072]: I1124 11:37:12.101354 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lrz9p" event={"ID":"50e2848c-d753-449b-ad0d-2b8a862cd800","Type":"ContainerStarted","Data":"d0270ab042317af573b03425d797a18d373e6ee61cb5df19d51f8b66dc41c2d7"} Nov 24 11:37:14 crc kubenswrapper[5072]: I1124 11:37:14.126914 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lrz9p" event={"ID":"50e2848c-d753-449b-ad0d-2b8a862cd800","Type":"ContainerStarted","Data":"1d88395c2efe70f24a107df6739293c60105543b4e4229e74e8b0a5b99430513"} Nov 24 11:37:14 crc kubenswrapper[5072]: I1124 11:37:14.153407 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lrz9p" podStartSLOduration=1.731979526 podStartE2EDuration="3.153349602s" podCreationTimestamp="2025-11-24 11:37:11 +0000 UTC" firstStartedPulling="2025-11-24 11:37:12.027196122 +0000 UTC m=+1683.738720598" lastFinishedPulling="2025-11-24 11:37:13.448566168 +0000 UTC m=+1685.160090674" observedRunningTime="2025-11-24 
11:37:14.1479111 +0000 UTC m=+1685.859435656" watchObservedRunningTime="2025-11-24 11:37:14.153349602 +0000 UTC m=+1685.864874098" Nov 24 11:37:15 crc kubenswrapper[5072]: I1124 11:37:15.600525 5072 scope.go:117] "RemoveContainer" containerID="bf3f982100274b1acee0560a68188bef797f3b326e9cf87408db76488ed1a3af" Nov 24 11:37:15 crc kubenswrapper[5072]: I1124 11:37:15.639219 5072 scope.go:117] "RemoveContainer" containerID="f45b14f3baa514b53d006808a7fdbd82018d32f2ec7c97828a784ba48a03e010" Nov 24 11:37:15 crc kubenswrapper[5072]: I1124 11:37:15.719925 5072 scope.go:117] "RemoveContainer" containerID="87b7bfc7260ad355aa3429eec6df1b3d0b7dc0772906030b9f5e6aa32d3ba454" Nov 24 11:37:15 crc kubenswrapper[5072]: I1124 11:37:15.780120 5072 scope.go:117] "RemoveContainer" containerID="e6128dea18b58d4ec75aa109a5be0e46d0a423c1617596295d8068649a5c1861" Nov 24 11:37:15 crc kubenswrapper[5072]: I1124 11:37:15.826157 5072 scope.go:117] "RemoveContainer" containerID="7b5f998e1d6d141763d629ea2f6fd478be5fc98c84edfbe115f2f0f6c5753d93" Nov 24 11:37:15 crc kubenswrapper[5072]: I1124 11:37:15.863183 5072 scope.go:117] "RemoveContainer" containerID="74b37c494113b92a10313ea1622c376c7a0a02fd275104771a80623e25cc0d31" Nov 24 11:37:15 crc kubenswrapper[5072]: I1124 11:37:15.899580 5072 scope.go:117] "RemoveContainer" containerID="8a22f32584c45f6be5f8cd8133d0159b79ad525fbafc02835bd59e52937a16e9" Nov 24 11:37:15 crc kubenswrapper[5072]: I1124 11:37:15.935785 5072 scope.go:117] "RemoveContainer" containerID="177d910126f83504bed2ff81ce80cbea56bdbb20d350d92a1c83d12f5b98f316" Nov 24 11:37:22 crc kubenswrapper[5072]: I1124 11:37:22.218506 5072 generic.go:334] "Generic (PLEG): container finished" podID="50e2848c-d753-449b-ad0d-2b8a862cd800" containerID="1d88395c2efe70f24a107df6739293c60105543b4e4229e74e8b0a5b99430513" exitCode=0 Nov 24 11:37:22 crc kubenswrapper[5072]: I1124 11:37:22.218549 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lrz9p" event={"ID":"50e2848c-d753-449b-ad0d-2b8a862cd800","Type":"ContainerDied","Data":"1d88395c2efe70f24a107df6739293c60105543b4e4229e74e8b0a5b99430513"} Nov 24 11:37:23 crc kubenswrapper[5072]: I1124 11:37:23.662500 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lrz9p" Nov 24 11:37:23 crc kubenswrapper[5072]: I1124 11:37:23.816433 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50e2848c-d753-449b-ad0d-2b8a862cd800-inventory\") pod \"50e2848c-d753-449b-ad0d-2b8a862cd800\" (UID: \"50e2848c-d753-449b-ad0d-2b8a862cd800\") " Nov 24 11:37:23 crc kubenswrapper[5072]: I1124 11:37:23.816739 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/50e2848c-d753-449b-ad0d-2b8a862cd800-ssh-key\") pod \"50e2848c-d753-449b-ad0d-2b8a862cd800\" (UID: \"50e2848c-d753-449b-ad0d-2b8a862cd800\") " Nov 24 11:37:23 crc kubenswrapper[5072]: I1124 11:37:23.816795 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfbk2\" (UniqueName: \"kubernetes.io/projected/50e2848c-d753-449b-ad0d-2b8a862cd800-kube-api-access-kfbk2\") pod \"50e2848c-d753-449b-ad0d-2b8a862cd800\" (UID: \"50e2848c-d753-449b-ad0d-2b8a862cd800\") " Nov 24 11:37:23 crc kubenswrapper[5072]: I1124 11:37:23.824111 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50e2848c-d753-449b-ad0d-2b8a862cd800-kube-api-access-kfbk2" (OuterVolumeSpecName: "kube-api-access-kfbk2") pod "50e2848c-d753-449b-ad0d-2b8a862cd800" (UID: "50e2848c-d753-449b-ad0d-2b8a862cd800"). InnerVolumeSpecName "kube-api-access-kfbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:37:23 crc kubenswrapper[5072]: I1124 11:37:23.856707 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50e2848c-d753-449b-ad0d-2b8a862cd800-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "50e2848c-d753-449b-ad0d-2b8a862cd800" (UID: "50e2848c-d753-449b-ad0d-2b8a862cd800"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:23 crc kubenswrapper[5072]: I1124 11:37:23.857751 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50e2848c-d753-449b-ad0d-2b8a862cd800-inventory" (OuterVolumeSpecName: "inventory") pod "50e2848c-d753-449b-ad0d-2b8a862cd800" (UID: "50e2848c-d753-449b-ad0d-2b8a862cd800"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:23 crc kubenswrapper[5072]: I1124 11:37:23.919888 5072 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/50e2848c-d753-449b-ad0d-2b8a862cd800-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:23 crc kubenswrapper[5072]: I1124 11:37:23.919940 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfbk2\" (UniqueName: \"kubernetes.io/projected/50e2848c-d753-449b-ad0d-2b8a862cd800-kube-api-access-kfbk2\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:23 crc kubenswrapper[5072]: I1124 11:37:23.919953 5072 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50e2848c-d753-449b-ad0d-2b8a862cd800-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:24 crc kubenswrapper[5072]: I1124 11:37:24.016550 5072 scope.go:117] "RemoveContainer" containerID="f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec" Nov 24 11:37:24 crc kubenswrapper[5072]: E1124 11:37:24.016868 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:37:24 crc kubenswrapper[5072]: I1124 11:37:24.290086 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lrz9p" event={"ID":"50e2848c-d753-449b-ad0d-2b8a862cd800","Type":"ContainerDied","Data":"d0270ab042317af573b03425d797a18d373e6ee61cb5df19d51f8b66dc41c2d7"} Nov 24 11:37:24 crc kubenswrapper[5072]: I1124 11:37:24.290148 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0270ab042317af573b03425d797a18d373e6ee61cb5df19d51f8b66dc41c2d7" Nov 24 11:37:24 crc kubenswrapper[5072]: I1124 11:37:24.290248 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lrz9p" Nov 24 11:37:24 crc kubenswrapper[5072]: I1124 11:37:24.336409 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd"] Nov 24 11:37:24 crc kubenswrapper[5072]: E1124 11:37:24.336813 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50e2848c-d753-449b-ad0d-2b8a862cd800" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:37:24 crc kubenswrapper[5072]: I1124 11:37:24.336833 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="50e2848c-d753-449b-ad0d-2b8a862cd800" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:37:24 crc kubenswrapper[5072]: I1124 11:37:24.337076 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="50e2848c-d753-449b-ad0d-2b8a862cd800" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:37:24 crc kubenswrapper[5072]: I1124 11:37:24.337838 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd" Nov 24 11:37:24 crc kubenswrapper[5072]: I1124 11:37:24.339925 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:37:24 crc kubenswrapper[5072]: I1124 11:37:24.340286 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:37:24 crc kubenswrapper[5072]: I1124 11:37:24.340775 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:37:24 crc kubenswrapper[5072]: I1124 11:37:24.341048 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b6s7d" Nov 24 11:37:24 crc kubenswrapper[5072]: I1124 11:37:24.348967 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd"] Nov 24 11:37:24 crc kubenswrapper[5072]: I1124 11:37:24.429953 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98ed5522-ccf3-4c2a-81c3-d3013af6442b-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd\" (UID: \"98ed5522-ccf3-4c2a-81c3-d3013af6442b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd" Nov 24 11:37:24 crc kubenswrapper[5072]: I1124 11:37:24.430001 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98ed5522-ccf3-4c2a-81c3-d3013af6442b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd\" (UID: \"98ed5522-ccf3-4c2a-81c3-d3013af6442b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd" Nov 24 11:37:24 crc kubenswrapper[5072]: I1124 11:37:24.430031 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt2bj\" (UniqueName: \"kubernetes.io/projected/98ed5522-ccf3-4c2a-81c3-d3013af6442b-kube-api-access-nt2bj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd\" (UID: \"98ed5522-ccf3-4c2a-81c3-d3013af6442b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd" Nov 24 11:37:24 crc kubenswrapper[5072]: I1124 11:37:24.532848 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98ed5522-ccf3-4c2a-81c3-d3013af6442b-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd\" (UID: \"98ed5522-ccf3-4c2a-81c3-d3013af6442b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd" Nov 24 11:37:24 crc kubenswrapper[5072]: I1124 11:37:24.533360 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98ed5522-ccf3-4c2a-81c3-d3013af6442b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd\" (UID: \"98ed5522-ccf3-4c2a-81c3-d3013af6442b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd" Nov 24 11:37:24 crc kubenswrapper[5072]: I1124 11:37:24.533435 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt2bj\" (UniqueName: \"kubernetes.io/projected/98ed5522-ccf3-4c2a-81c3-d3013af6442b-kube-api-access-nt2bj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd\" (UID: 
\"98ed5522-ccf3-4c2a-81c3-d3013af6442b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd" Nov 24 11:37:24 crc kubenswrapper[5072]: I1124 11:37:24.537924 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98ed5522-ccf3-4c2a-81c3-d3013af6442b-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd\" (UID: \"98ed5522-ccf3-4c2a-81c3-d3013af6442b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd" Nov 24 11:37:24 crc kubenswrapper[5072]: I1124 11:37:24.539117 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98ed5522-ccf3-4c2a-81c3-d3013af6442b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd\" (UID: \"98ed5522-ccf3-4c2a-81c3-d3013af6442b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd" Nov 24 11:37:24 crc kubenswrapper[5072]: I1124 11:37:24.551165 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt2bj\" (UniqueName: \"kubernetes.io/projected/98ed5522-ccf3-4c2a-81c3-d3013af6442b-kube-api-access-nt2bj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd\" (UID: \"98ed5522-ccf3-4c2a-81c3-d3013af6442b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd" Nov 24 11:37:24 crc kubenswrapper[5072]: I1124 11:37:24.710672 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd" Nov 24 11:37:25 crc kubenswrapper[5072]: I1124 11:37:25.045656 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vp5q8"] Nov 24 11:37:25 crc kubenswrapper[5072]: I1124 11:37:25.052135 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vp5q8"] Nov 24 11:37:25 crc kubenswrapper[5072]: W1124 11:37:25.126654 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98ed5522_ccf3_4c2a_81c3_d3013af6442b.slice/crio-673ec2542806e44075a64c39861a63dd4582af93a5a85ab651d0f6bd69b52ba3 WatchSource:0}: Error finding container 673ec2542806e44075a64c39861a63dd4582af93a5a85ab651d0f6bd69b52ba3: Status 404 returned error can't find the container with id 673ec2542806e44075a64c39861a63dd4582af93a5a85ab651d0f6bd69b52ba3 Nov 24 11:37:25 crc kubenswrapper[5072]: I1124 11:37:25.129212 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd"] Nov 24 11:37:25 crc kubenswrapper[5072]: I1124 11:37:25.303130 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd" event={"ID":"98ed5522-ccf3-4c2a-81c3-d3013af6442b","Type":"ContainerStarted","Data":"673ec2542806e44075a64c39861a63dd4582af93a5a85ab651d0f6bd69b52ba3"} Nov 24 11:37:26 crc kubenswrapper[5072]: I1124 11:37:26.315180 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd" event={"ID":"98ed5522-ccf3-4c2a-81c3-d3013af6442b","Type":"ContainerStarted","Data":"c3fb37aaeb9e5ac882e6158fdd7359f212f5dbfaa3d7e6da67936447484f7258"} Nov 24 11:37:26 crc kubenswrapper[5072]: I1124 11:37:26.333238 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd" 
podStartSLOduration=1.755780971 podStartE2EDuration="2.333216342s" podCreationTimestamp="2025-11-24 11:37:24 +0000 UTC" firstStartedPulling="2025-11-24 11:37:25.128751088 +0000 UTC m=+1696.840275564" lastFinishedPulling="2025-11-24 11:37:25.706186459 +0000 UTC m=+1697.417710935" observedRunningTime="2025-11-24 11:37:26.331759216 +0000 UTC m=+1698.043283692" watchObservedRunningTime="2025-11-24 11:37:26.333216342 +0000 UTC m=+1698.044740818" Nov 24 11:37:27 crc kubenswrapper[5072]: I1124 11:37:27.032050 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da16f5d0-f121-4388-983a-caca760fa5c6" path="/var/lib/kubelet/pods/da16f5d0-f121-4388-983a-caca760fa5c6/volumes" Nov 24 11:37:32 crc kubenswrapper[5072]: I1124 11:37:32.701593 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pwqtd"] Nov 24 11:37:32 crc kubenswrapper[5072]: I1124 11:37:32.758854 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pwqtd" Nov 24 11:37:32 crc kubenswrapper[5072]: I1124 11:37:32.760257 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pwqtd"] Nov 24 11:37:32 crc kubenswrapper[5072]: I1124 11:37:32.910547 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nfpq\" (UniqueName: \"kubernetes.io/projected/815a6f38-93cc-4a99-9c61-1102103a6dfe-kube-api-access-2nfpq\") pod \"certified-operators-pwqtd\" (UID: \"815a6f38-93cc-4a99-9c61-1102103a6dfe\") " pod="openshift-marketplace/certified-operators-pwqtd" Nov 24 11:37:32 crc kubenswrapper[5072]: I1124 11:37:32.910891 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/815a6f38-93cc-4a99-9c61-1102103a6dfe-catalog-content\") pod \"certified-operators-pwqtd\" (UID: \"815a6f38-93cc-4a99-9c61-1102103a6dfe\") " pod="openshift-marketplace/certified-operators-pwqtd" Nov 24 11:37:32 crc kubenswrapper[5072]: I1124 11:37:32.911054 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/815a6f38-93cc-4a99-9c61-1102103a6dfe-utilities\") pod \"certified-operators-pwqtd\" (UID: \"815a6f38-93cc-4a99-9c61-1102103a6dfe\") " pod="openshift-marketplace/certified-operators-pwqtd" Nov 24 11:37:33 crc kubenswrapper[5072]: I1124 11:37:33.013634 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/815a6f38-93cc-4a99-9c61-1102103a6dfe-utilities\") pod \"certified-operators-pwqtd\" (UID: \"815a6f38-93cc-4a99-9c61-1102103a6dfe\") " pod="openshift-marketplace/certified-operators-pwqtd" Nov 24 11:37:33 crc kubenswrapper[5072]: I1124 11:37:33.014264 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/815a6f38-93cc-4a99-9c61-1102103a6dfe-utilities\") pod \"certified-operators-pwqtd\" (UID: \"815a6f38-93cc-4a99-9c61-1102103a6dfe\") " pod="openshift-marketplace/certified-operators-pwqtd" Nov 24 11:37:33 crc kubenswrapper[5072]: I1124 11:37:33.014505 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nfpq\" (UniqueName: \"kubernetes.io/projected/815a6f38-93cc-4a99-9c61-1102103a6dfe-kube-api-access-2nfpq\") pod \"certified-operators-pwqtd\" (UID: 
\"815a6f38-93cc-4a99-9c61-1102103a6dfe\") " pod="openshift-marketplace/certified-operators-pwqtd" Nov 24 11:37:33 crc kubenswrapper[5072]: I1124 11:37:33.014618 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/815a6f38-93cc-4a99-9c61-1102103a6dfe-catalog-content\") pod \"certified-operators-pwqtd\" (UID: \"815a6f38-93cc-4a99-9c61-1102103a6dfe\") " pod="openshift-marketplace/certified-operators-pwqtd" Nov 24 11:37:33 crc kubenswrapper[5072]: I1124 11:37:33.015313 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/815a6f38-93cc-4a99-9c61-1102103a6dfe-catalog-content\") pod \"certified-operators-pwqtd\" (UID: \"815a6f38-93cc-4a99-9c61-1102103a6dfe\") " pod="openshift-marketplace/certified-operators-pwqtd" Nov 24 11:37:33 crc kubenswrapper[5072]: I1124 11:37:33.034582 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nfpq\" (UniqueName: \"kubernetes.io/projected/815a6f38-93cc-4a99-9c61-1102103a6dfe-kube-api-access-2nfpq\") pod \"certified-operators-pwqtd\" (UID: \"815a6f38-93cc-4a99-9c61-1102103a6dfe\") " pod="openshift-marketplace/certified-operators-pwqtd" Nov 24 11:37:33 crc kubenswrapper[5072]: I1124 11:37:33.096998 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pwqtd" Nov 24 11:37:33 crc kubenswrapper[5072]: I1124 11:37:33.564870 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pwqtd"] Nov 24 11:37:34 crc kubenswrapper[5072]: I1124 11:37:34.407211 5072 generic.go:334] "Generic (PLEG): container finished" podID="815a6f38-93cc-4a99-9c61-1102103a6dfe" containerID="7f464e895b06c50aa9e096337e79a9b0c5e2d7afc0aa6c0fb212448df0516f0f" exitCode=0 Nov 24 11:37:34 crc kubenswrapper[5072]: I1124 11:37:34.407583 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pwqtd" event={"ID":"815a6f38-93cc-4a99-9c61-1102103a6dfe","Type":"ContainerDied","Data":"7f464e895b06c50aa9e096337e79a9b0c5e2d7afc0aa6c0fb212448df0516f0f"} Nov 24 11:37:34 crc kubenswrapper[5072]: I1124 11:37:34.407658 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pwqtd" event={"ID":"815a6f38-93cc-4a99-9c61-1102103a6dfe","Type":"ContainerStarted","Data":"6f432640668112f859a999a7a97a0fa995e338f42d67ae88d00f12582feca0e5"} Nov 24 11:37:35 crc kubenswrapper[5072]: I1124 11:37:35.017079 5072 scope.go:117] "RemoveContainer" containerID="f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec" Nov 24 11:37:35 crc kubenswrapper[5072]: E1124 11:37:35.017889 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:37:35 crc kubenswrapper[5072]: I1124 11:37:35.422307 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pwqtd" event={"ID":"815a6f38-93cc-4a99-9c61-1102103a6dfe","Type":"ContainerStarted","Data":"1b43756b714d0e2825504e694273a32cfc80184e1c23835d77c6487a495137e8"} Nov 24 
11:37:36 crc kubenswrapper[5072]: I1124 11:37:36.438682 5072 generic.go:334] "Generic (PLEG): container finished" podID="98ed5522-ccf3-4c2a-81c3-d3013af6442b" containerID="c3fb37aaeb9e5ac882e6158fdd7359f212f5dbfaa3d7e6da67936447484f7258" exitCode=0 Nov 24 11:37:36 crc kubenswrapper[5072]: I1124 11:37:36.438814 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd" event={"ID":"98ed5522-ccf3-4c2a-81c3-d3013af6442b","Type":"ContainerDied","Data":"c3fb37aaeb9e5ac882e6158fdd7359f212f5dbfaa3d7e6da67936447484f7258"} Nov 24 11:37:36 crc kubenswrapper[5072]: I1124 11:37:36.442064 5072 generic.go:334] "Generic (PLEG): container finished" podID="815a6f38-93cc-4a99-9c61-1102103a6dfe" containerID="1b43756b714d0e2825504e694273a32cfc80184e1c23835d77c6487a495137e8" exitCode=0 Nov 24 11:37:36 crc kubenswrapper[5072]: I1124 11:37:36.442141 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pwqtd" event={"ID":"815a6f38-93cc-4a99-9c61-1102103a6dfe","Type":"ContainerDied","Data":"1b43756b714d0e2825504e694273a32cfc80184e1c23835d77c6487a495137e8"} Nov 24 11:37:37 crc kubenswrapper[5072]: I1124 11:37:37.454757 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pwqtd" event={"ID":"815a6f38-93cc-4a99-9c61-1102103a6dfe","Type":"ContainerStarted","Data":"e43869071d1170ca93a77ad808f13197850bf968c662933edb9e3ef350ef2ef6"} Nov 24 11:37:37 crc kubenswrapper[5072]: I1124 11:37:37.488101 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pwqtd" podStartSLOduration=2.979889392 podStartE2EDuration="5.488073002s" podCreationTimestamp="2025-11-24 11:37:32 +0000 UTC" firstStartedPulling="2025-11-24 11:37:34.413050938 +0000 UTC m=+1706.124575454" lastFinishedPulling="2025-11-24 11:37:36.921234588 +0000 UTC m=+1708.632759064" observedRunningTime="2025-11-24 11:37:37.470722192 +0000 UTC m=+1709.182246678" watchObservedRunningTime="2025-11-24 11:37:37.488073002 +0000 UTC m=+1709.199597518"
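For readers reconciling the startup-duration figures in the entry above (and in the matching entries for the two openstack deployment pods earlier): podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (11:37:37.488073002 - 11:37:32 = 5.488073002s), and podStartSLOduration subtracts the image-pull window measured on the monotonic clock, i.e. the m=+... offsets (5.488073002 - (1708.632759064 - 1706.124575454) = 2.979889392). A minimal Go sketch of that arithmetic using the logged values; variable names are illustrative, not kubelet's own:

package main

import (
	"fmt"
	"time"
)

func main() {
	created := time.Date(2025, 11, 24, 11, 37, 32, 0, time.UTC)
	running := time.Date(2025, 11, 24, 11, 37, 37, 488073002, time.UTC) // watchObservedRunningTime
	// Image-pull window taken from the monotonic offsets (m=+...), not wall-clock times.
	firstStartedPulling := 1706.124575454
	lastFinishedPulling := 1708.632759064

	e2e := running.Sub(created) // 5.488073002s
	pull := time.Duration((lastFinishedPulling - firstStartedPulling) * float64(time.Second))
	slo := e2e - pull // ~2.979889392s, modulo float64 rounding at the nanosecond level
	fmt.Println(e2e, pull, slo)
}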
Nov 24 11:37:37 crc kubenswrapper[5072]: I1124 11:37:37.925535 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd" Nov 24 11:37:38 crc kubenswrapper[5072]: I1124 11:37:38.021180 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98ed5522-ccf3-4c2a-81c3-d3013af6442b-inventory\") pod \"98ed5522-ccf3-4c2a-81c3-d3013af6442b\" (UID: \"98ed5522-ccf3-4c2a-81c3-d3013af6442b\") " Nov 24 11:37:38 crc kubenswrapper[5072]: I1124 11:37:38.021248 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98ed5522-ccf3-4c2a-81c3-d3013af6442b-ssh-key\") pod \"98ed5522-ccf3-4c2a-81c3-d3013af6442b\" (UID: \"98ed5522-ccf3-4c2a-81c3-d3013af6442b\") " Nov 24 11:37:38 crc kubenswrapper[5072]: I1124 11:37:38.021302 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt2bj\" (UniqueName: \"kubernetes.io/projected/98ed5522-ccf3-4c2a-81c3-d3013af6442b-kube-api-access-nt2bj\") pod \"98ed5522-ccf3-4c2a-81c3-d3013af6442b\" (UID: \"98ed5522-ccf3-4c2a-81c3-d3013af6442b\") " Nov 24 11:37:38 crc kubenswrapper[5072]: I1124 11:37:38.028401 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98ed5522-ccf3-4c2a-81c3-d3013af6442b-kube-api-access-nt2bj" (OuterVolumeSpecName: "kube-api-access-nt2bj") pod "98ed5522-ccf3-4c2a-81c3-d3013af6442b" (UID: "98ed5522-ccf3-4c2a-81c3-d3013af6442b"). InnerVolumeSpecName "kube-api-access-nt2bj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:37:38 crc kubenswrapper[5072]: I1124 11:37:38.052514 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98ed5522-ccf3-4c2a-81c3-d3013af6442b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "98ed5522-ccf3-4c2a-81c3-d3013af6442b" (UID: "98ed5522-ccf3-4c2a-81c3-d3013af6442b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:38 crc kubenswrapper[5072]: I1124 11:37:38.060261 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98ed5522-ccf3-4c2a-81c3-d3013af6442b-inventory" (OuterVolumeSpecName: "inventory") pod "98ed5522-ccf3-4c2a-81c3-d3013af6442b" (UID: "98ed5522-ccf3-4c2a-81c3-d3013af6442b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:38 crc kubenswrapper[5072]: I1124 11:37:38.123304 5072 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98ed5522-ccf3-4c2a-81c3-d3013af6442b-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:38 crc kubenswrapper[5072]: I1124 11:37:38.123337 5072 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98ed5522-ccf3-4c2a-81c3-d3013af6442b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:38 crc kubenswrapper[5072]: I1124 11:37:38.123349 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt2bj\" (UniqueName: \"kubernetes.io/projected/98ed5522-ccf3-4c2a-81c3-d3013af6442b-kube-api-access-nt2bj\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:38 crc kubenswrapper[5072]: I1124 11:37:38.468506 5072 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd" Nov 24 11:37:38 crc kubenswrapper[5072]: I1124 11:37:38.470508 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd" event={"ID":"98ed5522-ccf3-4c2a-81c3-d3013af6442b","Type":"ContainerDied","Data":"673ec2542806e44075a64c39861a63dd4582af93a5a85ab651d0f6bd69b52ba3"} Nov 24 11:37:38 crc kubenswrapper[5072]: I1124 11:37:38.470556 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="673ec2542806e44075a64c39861a63dd4582af93a5a85ab651d0f6bd69b52ba3" Nov 24 11:37:43 crc kubenswrapper[5072]: I1124 11:37:43.064912 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-cd5bg"] Nov 24 11:37:43 crc kubenswrapper[5072]: I1124 11:37:43.078005 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-28tkc"] Nov 24 11:37:43 crc kubenswrapper[5072]: I1124 11:37:43.086218 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-cd5bg"] Nov 24 11:37:43 crc kubenswrapper[5072]: I1124 11:37:43.096299 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-28tkc"] Nov 24 11:37:43 crc kubenswrapper[5072]: I1124 11:37:43.097607 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pwqtd" Nov 24 11:37:43 crc kubenswrapper[5072]: I1124 11:37:43.097963 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pwqtd" Nov 24 11:37:43 crc kubenswrapper[5072]: I1124 11:37:43.152868 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pwqtd" Nov 24 11:37:43 crc kubenswrapper[5072]: I1124 11:37:43.594875 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pwqtd" Nov 24 11:37:43 crc kubenswrapper[5072]: I1124 11:37:43.656554 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pwqtd"] Nov 24 11:37:45 crc kubenswrapper[5072]: I1124 11:37:45.033564 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08555f6e-e089-44c2-9193-b40a03e6f2f5" path="/var/lib/kubelet/pods/08555f6e-e089-44c2-9193-b40a03e6f2f5/volumes" Nov 24 11:37:45 crc kubenswrapper[5072]: I1124 11:37:45.034768 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1dfc861-93be-4798-b474-eab29b57c56b" path="/var/lib/kubelet/pods/f1dfc861-93be-4798-b474-eab29b57c56b/volumes" Nov 24 11:37:45 crc kubenswrapper[5072]: I1124 11:37:45.540140 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pwqtd" podUID="815a6f38-93cc-4a99-9c61-1102103a6dfe" containerName="registry-server" containerID="cri-o://e43869071d1170ca93a77ad808f13197850bf968c662933edb9e3ef350ef2ef6" gracePeriod=2 Nov 24 11:37:46 crc kubenswrapper[5072]: I1124 11:37:46.003202 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pwqtd" Nov 24 11:37:46 crc kubenswrapper[5072]: I1124 11:37:46.083611 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nfpq\" (UniqueName: \"kubernetes.io/projected/815a6f38-93cc-4a99-9c61-1102103a6dfe-kube-api-access-2nfpq\") pod \"815a6f38-93cc-4a99-9c61-1102103a6dfe\" (UID: \"815a6f38-93cc-4a99-9c61-1102103a6dfe\") " Nov 24 11:37:46 crc kubenswrapper[5072]: I1124 11:37:46.083757 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/815a6f38-93cc-4a99-9c61-1102103a6dfe-utilities\") pod \"815a6f38-93cc-4a99-9c61-1102103a6dfe\" (UID: \"815a6f38-93cc-4a99-9c61-1102103a6dfe\") " Nov 24 11:37:46 crc kubenswrapper[5072]: I1124 11:37:46.083884 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/815a6f38-93cc-4a99-9c61-1102103a6dfe-catalog-content\") pod \"815a6f38-93cc-4a99-9c61-1102103a6dfe\" (UID: \"815a6f38-93cc-4a99-9c61-1102103a6dfe\") " Nov 24 11:37:46 crc kubenswrapper[5072]: I1124 11:37:46.085410 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/815a6f38-93cc-4a99-9c61-1102103a6dfe-utilities" (OuterVolumeSpecName: "utilities") pod "815a6f38-93cc-4a99-9c61-1102103a6dfe" (UID: "815a6f38-93cc-4a99-9c61-1102103a6dfe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:37:46 crc kubenswrapper[5072]: I1124 11:37:46.092524 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/815a6f38-93cc-4a99-9c61-1102103a6dfe-kube-api-access-2nfpq" (OuterVolumeSpecName: "kube-api-access-2nfpq") pod "815a6f38-93cc-4a99-9c61-1102103a6dfe" (UID: "815a6f38-93cc-4a99-9c61-1102103a6dfe"). InnerVolumeSpecName "kube-api-access-2nfpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:37:46 crc kubenswrapper[5072]: I1124 11:37:46.138060 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/815a6f38-93cc-4a99-9c61-1102103a6dfe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "815a6f38-93cc-4a99-9c61-1102103a6dfe" (UID: "815a6f38-93cc-4a99-9c61-1102103a6dfe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:37:46 crc kubenswrapper[5072]: I1124 11:37:46.186029 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nfpq\" (UniqueName: \"kubernetes.io/projected/815a6f38-93cc-4a99-9c61-1102103a6dfe-kube-api-access-2nfpq\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:46 crc kubenswrapper[5072]: I1124 11:37:46.186065 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/815a6f38-93cc-4a99-9c61-1102103a6dfe-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:46 crc kubenswrapper[5072]: I1124 11:37:46.186075 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/815a6f38-93cc-4a99-9c61-1102103a6dfe-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:46 crc kubenswrapper[5072]: I1124 11:37:46.552876 5072 generic.go:334] "Generic (PLEG): container finished" podID="815a6f38-93cc-4a99-9c61-1102103a6dfe" containerID="e43869071d1170ca93a77ad808f13197850bf968c662933edb9e3ef350ef2ef6" exitCode=0 Nov 24 11:37:46 crc kubenswrapper[5072]: I1124 11:37:46.552955 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pwqtd" event={"ID":"815a6f38-93cc-4a99-9c61-1102103a6dfe","Type":"ContainerDied","Data":"e43869071d1170ca93a77ad808f13197850bf968c662933edb9e3ef350ef2ef6"} Nov 24 11:37:46 crc kubenswrapper[5072]: I1124 11:37:46.553004 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pwqtd" event={"ID":"815a6f38-93cc-4a99-9c61-1102103a6dfe","Type":"ContainerDied","Data":"6f432640668112f859a999a7a97a0fa995e338f42d67ae88d00f12582feca0e5"} Nov 24 11:37:46 crc kubenswrapper[5072]: I1124 11:37:46.553040 5072 scope.go:117] "RemoveContainer" containerID="e43869071d1170ca93a77ad808f13197850bf968c662933edb9e3ef350ef2ef6" Nov 24 11:37:46 crc kubenswrapper[5072]: I1124 11:37:46.553224 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pwqtd" Nov 24 11:37:46 crc kubenswrapper[5072]: I1124 11:37:46.574145 5072 scope.go:117] "RemoveContainer" containerID="1b43756b714d0e2825504e694273a32cfc80184e1c23835d77c6487a495137e8" Nov 24 11:37:46 crc kubenswrapper[5072]: I1124 11:37:46.604489 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pwqtd"] Nov 24 11:37:46 crc kubenswrapper[5072]: I1124 11:37:46.614870 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pwqtd"] Nov 24 11:37:46 crc kubenswrapper[5072]: I1124 11:37:46.615228 5072 scope.go:117] "RemoveContainer" containerID="7f464e895b06c50aa9e096337e79a9b0c5e2d7afc0aa6c0fb212448df0516f0f" Nov 24 11:37:46 crc kubenswrapper[5072]: I1124 11:37:46.660015 5072 scope.go:117] "RemoveContainer" containerID="e43869071d1170ca93a77ad808f13197850bf968c662933edb9e3ef350ef2ef6" Nov 24 11:37:46 crc kubenswrapper[5072]: E1124 11:37:46.661964 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e43869071d1170ca93a77ad808f13197850bf968c662933edb9e3ef350ef2ef6\": container with ID starting with e43869071d1170ca93a77ad808f13197850bf968c662933edb9e3ef350ef2ef6 not found: ID does not exist" containerID="e43869071d1170ca93a77ad808f13197850bf968c662933edb9e3ef350ef2ef6" Nov 24 11:37:46 crc kubenswrapper[5072]: I1124 11:37:46.662027 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e43869071d1170ca93a77ad808f13197850bf968c662933edb9e3ef350ef2ef6"} err="failed to get container status \"e43869071d1170ca93a77ad808f13197850bf968c662933edb9e3ef350ef2ef6\": rpc error: code = NotFound desc = could not find container \"e43869071d1170ca93a77ad808f13197850bf968c662933edb9e3ef350ef2ef6\": container with ID starting with e43869071d1170ca93a77ad808f13197850bf968c662933edb9e3ef350ef2ef6 not found: ID does not exist" Nov 24 11:37:46 crc kubenswrapper[5072]: I1124 11:37:46.662075 5072 scope.go:117] "RemoveContainer" containerID="1b43756b714d0e2825504e694273a32cfc80184e1c23835d77c6487a495137e8" Nov 24 11:37:46 crc kubenswrapper[5072]: E1124 11:37:46.662554 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b43756b714d0e2825504e694273a32cfc80184e1c23835d77c6487a495137e8\": container with ID starting with 1b43756b714d0e2825504e694273a32cfc80184e1c23835d77c6487a495137e8 not found: ID does not exist" containerID="1b43756b714d0e2825504e694273a32cfc80184e1c23835d77c6487a495137e8" Nov 24 11:37:46 crc kubenswrapper[5072]: I1124 11:37:46.662604 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b43756b714d0e2825504e694273a32cfc80184e1c23835d77c6487a495137e8"} err="failed to get container status \"1b43756b714d0e2825504e694273a32cfc80184e1c23835d77c6487a495137e8\": rpc error: code = NotFound desc = could not find container \"1b43756b714d0e2825504e694273a32cfc80184e1c23835d77c6487a495137e8\": container with ID starting with 1b43756b714d0e2825504e694273a32cfc80184e1c23835d77c6487a495137e8 not found: ID does not exist" Nov 24 11:37:46 crc kubenswrapper[5072]: I1124 11:37:46.662644 5072 scope.go:117] "RemoveContainer" containerID="7f464e895b06c50aa9e096337e79a9b0c5e2d7afc0aa6c0fb212448df0516f0f" Nov 24 11:37:46 crc kubenswrapper[5072]: E1124 11:37:46.663162 5072 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7f464e895b06c50aa9e096337e79a9b0c5e2d7afc0aa6c0fb212448df0516f0f\": container with ID starting with 7f464e895b06c50aa9e096337e79a9b0c5e2d7afc0aa6c0fb212448df0516f0f not found: ID does not exist" containerID="7f464e895b06c50aa9e096337e79a9b0c5e2d7afc0aa6c0fb212448df0516f0f" Nov 24 11:37:46 crc kubenswrapper[5072]: I1124 11:37:46.663207 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f464e895b06c50aa9e096337e79a9b0c5e2d7afc0aa6c0fb212448df0516f0f"} err="failed to get container status \"7f464e895b06c50aa9e096337e79a9b0c5e2d7afc0aa6c0fb212448df0516f0f\": rpc error: code = NotFound desc = could not find container \"7f464e895b06c50aa9e096337e79a9b0c5e2d7afc0aa6c0fb212448df0516f0f\": container with ID starting with 7f464e895b06c50aa9e096337e79a9b0c5e2d7afc0aa6c0fb212448df0516f0f not found: ID does not exist" Nov 24 11:37:47 crc kubenswrapper[5072]: I1124 11:37:47.038648 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="815a6f38-93cc-4a99-9c61-1102103a6dfe" path="/var/lib/kubelet/pods/815a6f38-93cc-4a99-9c61-1102103a6dfe/volumes" Nov 24 11:37:48 crc kubenswrapper[5072]: I1124 11:37:48.017072 5072 scope.go:117] "RemoveContainer" containerID="f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec" Nov 24 11:37:48 crc kubenswrapper[5072]: E1124 11:37:48.017568 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:38:00 crc kubenswrapper[5072]: I1124 11:38:00.017284 5072 scope.go:117] "RemoveContainer" containerID="f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec" Nov 24 11:38:00 crc kubenswrapper[5072]: E1124 11:38:00.018343 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:38:12 crc kubenswrapper[5072]: I1124 11:38:12.016544 5072 scope.go:117] "RemoveContainer" containerID="f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec" Nov 24 11:38:12 crc kubenswrapper[5072]: E1124 11:38:12.017448 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:38:16 crc kubenswrapper[5072]: I1124 11:38:16.128562 5072 scope.go:117] "RemoveContainer" containerID="c075a0b6c571df3a9da3865213dc0fdfafca0e85fcc958bd975825b331cd7639" Nov 24 11:38:16 crc kubenswrapper[5072]: I1124 11:38:16.198802 5072 scope.go:117] "RemoveContainer" 
containerID="2a0b31b06b87bbc624e6f5a2b7b21d3dcc46b487c372cb54d650bc6017fdd911" Nov 24 11:38:16 crc kubenswrapper[5072]: I1124 11:38:16.255643 5072 scope.go:117] "RemoveContainer" containerID="dd9b1d0df5faeef81f5840dd58ed4436962ca833cf0b88f5779837a365ae20aa" Nov 24 11:38:24 crc kubenswrapper[5072]: I1124 11:38:24.016056 5072 scope.go:117] "RemoveContainer" containerID="f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec" Nov 24 11:38:24 crc kubenswrapper[5072]: E1124 11:38:24.017911 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:38:27 crc kubenswrapper[5072]: I1124 11:38:27.059408 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-ghzbb"] Nov 24 11:38:27 crc kubenswrapper[5072]: I1124 11:38:27.066475 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-ghzbb"] Nov 24 11:38:29 crc kubenswrapper[5072]: I1124 11:38:29.036110 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4d90486-6954-484a-aa10-2ffa6789cdc7" path="/var/lib/kubelet/pods/e4d90486-6954-484a-aa10-2ffa6789cdc7/volumes" Nov 24 11:38:37 crc kubenswrapper[5072]: I1124 11:38:37.017796 5072 scope.go:117] "RemoveContainer" containerID="f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec" Nov 24 11:38:37 crc kubenswrapper[5072]: E1124 11:38:37.019018 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:38:51 crc kubenswrapper[5072]: I1124 11:38:51.021138 5072 scope.go:117] "RemoveContainer" containerID="f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec" Nov 24 11:38:51 crc kubenswrapper[5072]: E1124 11:38:51.022226 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:39:06 crc kubenswrapper[5072]: I1124 11:39:06.017326 5072 scope.go:117] "RemoveContainer" containerID="f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec" Nov 24 11:39:06 crc kubenswrapper[5072]: E1124 11:39:06.018636 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:39:16 crc 
kubenswrapper[5072]: I1124 11:39:16.392016 5072 scope.go:117] "RemoveContainer" containerID="adbbafa7dba3ea0127645167357936a6a57585ed79b55e0b0d66b94e6662c686" Nov 24 11:39:18 crc kubenswrapper[5072]: I1124 11:39:18.016206 5072 scope.go:117] "RemoveContainer" containerID="f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec" Nov 24 11:39:18 crc kubenswrapper[5072]: E1124 11:39:18.016941 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:39:29 crc kubenswrapper[5072]: I1124 11:39:29.028460 5072 scope.go:117] "RemoveContainer" containerID="f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec" Nov 24 11:39:29 crc kubenswrapper[5072]: E1124 11:39:29.029801 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:39:40 crc kubenswrapper[5072]: I1124 11:39:40.016305 5072 scope.go:117] "RemoveContainer" containerID="f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec" Nov 24 11:39:40 crc kubenswrapper[5072]: E1124 11:39:40.017003 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:39:55 crc kubenswrapper[5072]: I1124 11:39:55.016850 5072 scope.go:117] "RemoveContainer" containerID="f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec" Nov 24 11:39:55 crc kubenswrapper[5072]: E1124 11:39:55.017988 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:40:10 crc kubenswrapper[5072]: I1124 11:40:10.016239 5072 scope.go:117] "RemoveContainer" containerID="f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec" Nov 24 11:40:10 crc kubenswrapper[5072]: E1124 11:40:10.017310 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5"
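A note on the crash loop above: the machine-config-daemon restarts have saturated kubelet's container restart back-off at the 5m0s ceiling quoted in each pod_workers.go message, so every sync from 11:37:12 through 11:40:10 is skipped. In the entries that follow, the back-off window lapses: the RemoveContainer at 11:40:25 proceeds without a back-off error and PLEG reports ContainerStarted at 11:40:26. A minimal sketch of the doubling-with-cap pattern behind those messages (the 10s initial delay is an assumption about kubelet defaults, not something this log shows):

package main

import (
	"fmt"
	"time"
)

func main() {
	const maxBackoff = 5 * time.Minute // the "back-off 5m0s" ceiling seen in the log
	backoff := 10 * time.Second        // assumed initial delay; doubles after each crash

	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: wait %v before next attempt\n", restart, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff // by roughly the sixth restart the cap dominates
		}
	}
}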
Nov 24 11:40:25 crc kubenswrapper[5072]: I1124 11:40:25.016437 5072 scope.go:117] "RemoveContainer" containerID="f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec" Nov 24 11:40:26 crc kubenswrapper[5072]: I1124 11:40:26.243841 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerStarted","Data":"189ce64d61f8d24afa478e629c32eb4f3644b48f2f7f50733de592c3b81bfb86"} Nov 24 11:42:19 crc kubenswrapper[5072]: I1124 11:42:19.740093 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-n2x4l"] Nov 24 11:42:19 crc kubenswrapper[5072]: I1124 11:42:19.748010 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-rl64x"] Nov 24 11:42:19 crc kubenswrapper[5072]: I1124 11:42:19.756785 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-n2x4l"] Nov 24 11:42:19 crc kubenswrapper[5072]: I1124 11:42:19.765613 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-rl64x"] Nov 24 11:42:19 crc kubenswrapper[5072]: I1124 11:42:19.776814 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-99glp"] Nov 24 11:42:19 crc kubenswrapper[5072]: I1124 11:42:19.783906 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-99glp"] Nov 24 11:42:19 crc kubenswrapper[5072]: I1124 11:42:19.794053 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lrz9p"] Nov 24 11:42:19 crc kubenswrapper[5072]: I1124 11:42:19.800505 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm"] Nov 24 11:42:19 crc kubenswrapper[5072]: I1124 11:42:19.806433 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn"] Nov 24 11:42:19 crc kubenswrapper[5072]: I1124 11:42:19.812509 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd"] Nov 24 11:42:19 crc kubenswrapper[5072]: I1124 11:42:19.818674 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg"] Nov 24 11:42:19 crc kubenswrapper[5072]: I1124 11:42:19.823647 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h"] Nov 24 11:42:19 crc kubenswrapper[5072]: I1124 11:42:19.829060 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7"] Nov 24 11:42:19 crc kubenswrapper[5072]: I1124 11:42:19.833969 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-77zw7"] Nov 24 11:42:19 crc kubenswrapper[5072]: I1124 11:42:19.838924 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w24sd"] Nov 24 11:42:19 crc kubenswrapper[5072]: I1124 11:42:19.843704 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s2k9h"] Nov 24 11:42:19 crc kubenswrapper[5072]: I1124
11:42:19.848490 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j95cn"] Nov 24 11:42:19 crc kubenswrapper[5072]: I1124 11:42:19.853203 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w6jhm"] Nov 24 11:42:19 crc kubenswrapper[5072]: I1124 11:42:19.859756 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m7pfg"] Nov 24 11:42:19 crc kubenswrapper[5072]: I1124 11:42:19.865225 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lrz9p"] Nov 24 11:42:21 crc kubenswrapper[5072]: I1124 11:42:21.037648 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b6db25f-182c-4b29-a975-acfa3253dec8" path="/var/lib/kubelet/pods/1b6db25f-182c-4b29-a975-acfa3253dec8/volumes" Nov 24 11:42:21 crc kubenswrapper[5072]: I1124 11:42:21.039941 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e327a89-b7a4-4e57-bc77-bb3a64afce6d" path="/var/lib/kubelet/pods/2e327a89-b7a4-4e57-bc77-bb3a64afce6d/volumes" Nov 24 11:42:21 crc kubenswrapper[5072]: I1124 11:42:21.040717 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45bca15f-243e-425b-b451-de61c3da8a4d" path="/var/lib/kubelet/pods/45bca15f-243e-425b-b451-de61c3da8a4d/volumes" Nov 24 11:42:21 crc kubenswrapper[5072]: I1124 11:42:21.041527 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50e2848c-d753-449b-ad0d-2b8a862cd800" path="/var/lib/kubelet/pods/50e2848c-d753-449b-ad0d-2b8a862cd800/volumes" Nov 24 11:42:21 crc kubenswrapper[5072]: I1124 11:42:21.042990 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55d5c4ad-dbbc-4728-bac4-f12adda414f1" path="/var/lib/kubelet/pods/55d5c4ad-dbbc-4728-bac4-f12adda414f1/volumes" Nov 24 11:42:21 crc kubenswrapper[5072]: I1124 11:42:21.043777 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cf1de62-84ec-42cd-8354-14d52eb4e29b" path="/var/lib/kubelet/pods/6cf1de62-84ec-42cd-8354-14d52eb4e29b/volumes" Nov 24 11:42:21 crc kubenswrapper[5072]: I1124 11:42:21.044562 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98ed5522-ccf3-4c2a-81c3-d3013af6442b" path="/var/lib/kubelet/pods/98ed5522-ccf3-4c2a-81c3-d3013af6442b/volumes" Nov 24 11:42:21 crc kubenswrapper[5072]: I1124 11:42:21.046517 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d9d2c85-76f2-4a51-be9c-7f2436ae35f1" path="/var/lib/kubelet/pods/9d9d2c85-76f2-4a51-be9c-7f2436ae35f1/volumes" Nov 24 11:42:21 crc kubenswrapper[5072]: I1124 11:42:21.047343 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3c835dc-ad27-4cd1-a28b-4875b1e87d8c" path="/var/lib/kubelet/pods/a3c835dc-ad27-4cd1-a28b-4875b1e87d8c/volumes" Nov 24 11:42:21 crc kubenswrapper[5072]: I1124 11:42:21.048276 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b41cd94b-9e44-431e-b3f9-76655cda4c0f" path="/var/lib/kubelet/pods/b41cd94b-9e44-431e-b3f9-76655cda4c0f/volumes" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.151341 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd"] Nov 24 11:42:26 crc kubenswrapper[5072]: E1124 11:42:26.153584 5072 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="815a6f38-93cc-4a99-9c61-1102103a6dfe" containerName="extract-content" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.153737 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="815a6f38-93cc-4a99-9c61-1102103a6dfe" containerName="extract-content" Nov 24 11:42:26 crc kubenswrapper[5072]: E1124 11:42:26.153824 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815a6f38-93cc-4a99-9c61-1102103a6dfe" containerName="extract-utilities" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.153881 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="815a6f38-93cc-4a99-9c61-1102103a6dfe" containerName="extract-utilities" Nov 24 11:42:26 crc kubenswrapper[5072]: E1124 11:42:26.153941 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98ed5522-ccf3-4c2a-81c3-d3013af6442b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.153997 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="98ed5522-ccf3-4c2a-81c3-d3013af6442b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:42:26 crc kubenswrapper[5072]: E1124 11:42:26.154063 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815a6f38-93cc-4a99-9c61-1102103a6dfe" containerName="registry-server" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.154116 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="815a6f38-93cc-4a99-9c61-1102103a6dfe" containerName="registry-server" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.154393 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="98ed5522-ccf3-4c2a-81c3-d3013af6442b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.154480 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="815a6f38-93cc-4a99-9c61-1102103a6dfe" containerName="registry-server" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.155352 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.157436 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.157520 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.157627 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b6s7d" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.157926 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.157932 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.167928 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd"] Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.349162 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftznj\" (UniqueName: \"kubernetes.io/projected/0dcc0eb2-52d6-4d82-bddd-960848462a81-kube-api-access-ftznj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd\" (UID: \"0dcc0eb2-52d6-4d82-bddd-960848462a81\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.349269 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dcc0eb2-52d6-4d82-bddd-960848462a81-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd\" (UID: \"0dcc0eb2-52d6-4d82-bddd-960848462a81\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.349469 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0dcc0eb2-52d6-4d82-bddd-960848462a81-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd\" (UID: \"0dcc0eb2-52d6-4d82-bddd-960848462a81\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.349530 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0dcc0eb2-52d6-4d82-bddd-960848462a81-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd\" (UID: \"0dcc0eb2-52d6-4d82-bddd-960848462a81\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.349735 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0dcc0eb2-52d6-4d82-bddd-960848462a81-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd\" (UID: \"0dcc0eb2-52d6-4d82-bddd-960848462a81\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.450852 5072 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0dcc0eb2-52d6-4d82-bddd-960848462a81-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd\" (UID: \"0dcc0eb2-52d6-4d82-bddd-960848462a81\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.450898 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftznj\" (UniqueName: \"kubernetes.io/projected/0dcc0eb2-52d6-4d82-bddd-960848462a81-kube-api-access-ftznj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd\" (UID: \"0dcc0eb2-52d6-4d82-bddd-960848462a81\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.450924 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dcc0eb2-52d6-4d82-bddd-960848462a81-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd\" (UID: \"0dcc0eb2-52d6-4d82-bddd-960848462a81\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.450995 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0dcc0eb2-52d6-4d82-bddd-960848462a81-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd\" (UID: \"0dcc0eb2-52d6-4d82-bddd-960848462a81\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.451033 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0dcc0eb2-52d6-4d82-bddd-960848462a81-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd\" (UID: \"0dcc0eb2-52d6-4d82-bddd-960848462a81\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.458904 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0dcc0eb2-52d6-4d82-bddd-960848462a81-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd\" (UID: \"0dcc0eb2-52d6-4d82-bddd-960848462a81\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.459030 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dcc0eb2-52d6-4d82-bddd-960848462a81-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd\" (UID: \"0dcc0eb2-52d6-4d82-bddd-960848462a81\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.459552 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0dcc0eb2-52d6-4d82-bddd-960848462a81-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd\" (UID: \"0dcc0eb2-52d6-4d82-bddd-960848462a81\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.464778 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/0dcc0eb2-52d6-4d82-bddd-960848462a81-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd\" (UID: \"0dcc0eb2-52d6-4d82-bddd-960848462a81\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.476645 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftznj\" (UniqueName: \"kubernetes.io/projected/0dcc0eb2-52d6-4d82-bddd-960848462a81-kube-api-access-ftznj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd\" (UID: \"0dcc0eb2-52d6-4d82-bddd-960848462a81\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd" Nov 24 11:42:26 crc kubenswrapper[5072]: I1124 11:42:26.485975 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd" Nov 24 11:42:27 crc kubenswrapper[5072]: I1124 11:42:27.045161 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd"] Nov 24 11:42:27 crc kubenswrapper[5072]: I1124 11:42:27.047617 5072 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 11:42:27 crc kubenswrapper[5072]: I1124 11:42:27.499102 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd" event={"ID":"0dcc0eb2-52d6-4d82-bddd-960848462a81","Type":"ContainerStarted","Data":"8d07bc66ee526039bc3f692a9d9394ea8fc5749edd683716109f27851ba59f9b"} Nov 24 11:42:28 crc kubenswrapper[5072]: I1124 11:42:28.508819 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd" event={"ID":"0dcc0eb2-52d6-4d82-bddd-960848462a81","Type":"ContainerStarted","Data":"9be8f67afc5e47664b5ee3c54345db27015491b1a47a68e8bedfcf3042e227aa"} Nov 24 11:42:28 crc kubenswrapper[5072]: I1124 11:42:28.533950 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd" podStartSLOduration=2.103191957 podStartE2EDuration="2.533932701s" podCreationTimestamp="2025-11-24 11:42:26 +0000 UTC" firstStartedPulling="2025-11-24 11:42:27.047332366 +0000 UTC m=+1998.758856842" lastFinishedPulling="2025-11-24 11:42:27.47807311 +0000 UTC m=+1999.189597586" observedRunningTime="2025-11-24 11:42:28.530149837 +0000 UTC m=+2000.241674313" watchObservedRunningTime="2025-11-24 11:42:28.533932701 +0000 UTC m=+2000.245457177" Nov 24 11:42:40 crc kubenswrapper[5072]: I1124 11:42:40.627436 5072 generic.go:334] "Generic (PLEG): container finished" podID="0dcc0eb2-52d6-4d82-bddd-960848462a81" containerID="9be8f67afc5e47664b5ee3c54345db27015491b1a47a68e8bedfcf3042e227aa" exitCode=0 Nov 24 11:42:40 crc kubenswrapper[5072]: I1124 11:42:40.627480 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd" event={"ID":"0dcc0eb2-52d6-4d82-bddd-960848462a81","Type":"ContainerDied","Data":"9be8f67afc5e47664b5ee3c54345db27015491b1a47a68e8bedfcf3042e227aa"} Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.111676 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.154037 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0dcc0eb2-52d6-4d82-bddd-960848462a81-ssh-key\") pod \"0dcc0eb2-52d6-4d82-bddd-960848462a81\" (UID: \"0dcc0eb2-52d6-4d82-bddd-960848462a81\") " Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.154200 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0dcc0eb2-52d6-4d82-bddd-960848462a81-inventory\") pod \"0dcc0eb2-52d6-4d82-bddd-960848462a81\" (UID: \"0dcc0eb2-52d6-4d82-bddd-960848462a81\") " Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.154323 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftznj\" (UniqueName: \"kubernetes.io/projected/0dcc0eb2-52d6-4d82-bddd-960848462a81-kube-api-access-ftznj\") pod \"0dcc0eb2-52d6-4d82-bddd-960848462a81\" (UID: \"0dcc0eb2-52d6-4d82-bddd-960848462a81\") " Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.154461 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dcc0eb2-52d6-4d82-bddd-960848462a81-repo-setup-combined-ca-bundle\") pod \"0dcc0eb2-52d6-4d82-bddd-960848462a81\" (UID: \"0dcc0eb2-52d6-4d82-bddd-960848462a81\") " Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.154514 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0dcc0eb2-52d6-4d82-bddd-960848462a81-ceph\") pod \"0dcc0eb2-52d6-4d82-bddd-960848462a81\" (UID: \"0dcc0eb2-52d6-4d82-bddd-960848462a81\") " Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.182157 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dcc0eb2-52d6-4d82-bddd-960848462a81-ceph" (OuterVolumeSpecName: "ceph") pod "0dcc0eb2-52d6-4d82-bddd-960848462a81" (UID: "0dcc0eb2-52d6-4d82-bddd-960848462a81"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.182455 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dcc0eb2-52d6-4d82-bddd-960848462a81-kube-api-access-ftznj" (OuterVolumeSpecName: "kube-api-access-ftznj") pod "0dcc0eb2-52d6-4d82-bddd-960848462a81" (UID: "0dcc0eb2-52d6-4d82-bddd-960848462a81"). InnerVolumeSpecName "kube-api-access-ftznj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.187613 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dcc0eb2-52d6-4d82-bddd-960848462a81-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "0dcc0eb2-52d6-4d82-bddd-960848462a81" (UID: "0dcc0eb2-52d6-4d82-bddd-960848462a81"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.201798 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dcc0eb2-52d6-4d82-bddd-960848462a81-inventory" (OuterVolumeSpecName: "inventory") pod "0dcc0eb2-52d6-4d82-bddd-960848462a81" (UID: "0dcc0eb2-52d6-4d82-bddd-960848462a81"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.211712 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dcc0eb2-52d6-4d82-bddd-960848462a81-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0dcc0eb2-52d6-4d82-bddd-960848462a81" (UID: "0dcc0eb2-52d6-4d82-bddd-960848462a81"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.256415 5072 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0dcc0eb2-52d6-4d82-bddd-960848462a81-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.256448 5072 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0dcc0eb2-52d6-4d82-bddd-960848462a81-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.256457 5072 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0dcc0eb2-52d6-4d82-bddd-960848462a81-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.256467 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftznj\" (UniqueName: \"kubernetes.io/projected/0dcc0eb2-52d6-4d82-bddd-960848462a81-kube-api-access-ftznj\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.256477 5072 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dcc0eb2-52d6-4d82-bddd-960848462a81-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.656459 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd" event={"ID":"0dcc0eb2-52d6-4d82-bddd-960848462a81","Type":"ContainerDied","Data":"8d07bc66ee526039bc3f692a9d9394ea8fc5749edd683716109f27851ba59f9b"} Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.656530 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.656538 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d07bc66ee526039bc3f692a9d9394ea8fc5749edd683716109f27851ba59f9b" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.741800 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4"] Nov 24 11:42:42 crc kubenswrapper[5072]: E1124 11:42:42.742242 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dcc0eb2-52d6-4d82-bddd-960848462a81" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.742259 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dcc0eb2-52d6-4d82-bddd-960848462a81" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.742503 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dcc0eb2-52d6-4d82-bddd-960848462a81" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.743330 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.748688 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b6s7d" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.749003 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.749229 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.749499 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.749732 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.764168 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ddef4dcc-c1f4-4057-8503-14afc5bffd37-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4\" (UID: \"ddef4dcc-c1f4-4057-8503-14afc5bffd37\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.764395 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7nvn\" (UniqueName: \"kubernetes.io/projected/ddef4dcc-c1f4-4057-8503-14afc5bffd37-kube-api-access-q7nvn\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4\" (UID: \"ddef4dcc-c1f4-4057-8503-14afc5bffd37\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.764458 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ddef4dcc-c1f4-4057-8503-14afc5bffd37-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4\" (UID: \"ddef4dcc-c1f4-4057-8503-14afc5bffd37\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.764501 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddef4dcc-c1f4-4057-8503-14afc5bffd37-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4\" (UID: \"ddef4dcc-c1f4-4057-8503-14afc5bffd37\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.764543 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddef4dcc-c1f4-4057-8503-14afc5bffd37-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4\" (UID: \"ddef4dcc-c1f4-4057-8503-14afc5bffd37\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.770474 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4"] Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.866723 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7nvn\" (UniqueName: \"kubernetes.io/projected/ddef4dcc-c1f4-4057-8503-14afc5bffd37-kube-api-access-q7nvn\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4\" (UID: \"ddef4dcc-c1f4-4057-8503-14afc5bffd37\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.866800 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ddef4dcc-c1f4-4057-8503-14afc5bffd37-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4\" (UID: \"ddef4dcc-c1f4-4057-8503-14afc5bffd37\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.866828 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddef4dcc-c1f4-4057-8503-14afc5bffd37-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4\" (UID: \"ddef4dcc-c1f4-4057-8503-14afc5bffd37\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.866850 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddef4dcc-c1f4-4057-8503-14afc5bffd37-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4\" (UID: \"ddef4dcc-c1f4-4057-8503-14afc5bffd37\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.866919 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ddef4dcc-c1f4-4057-8503-14afc5bffd37-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4\" (UID: \"ddef4dcc-c1f4-4057-8503-14afc5bffd37\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.871244 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/ddef4dcc-c1f4-4057-8503-14afc5bffd37-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4\" (UID: \"ddef4dcc-c1f4-4057-8503-14afc5bffd37\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.871428 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ddef4dcc-c1f4-4057-8503-14afc5bffd37-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4\" (UID: \"ddef4dcc-c1f4-4057-8503-14afc5bffd37\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.871608 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddef4dcc-c1f4-4057-8503-14afc5bffd37-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4\" (UID: \"ddef4dcc-c1f4-4057-8503-14afc5bffd37\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.872215 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddef4dcc-c1f4-4057-8503-14afc5bffd37-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4\" (UID: \"ddef4dcc-c1f4-4057-8503-14afc5bffd37\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4" Nov 24 11:42:42 crc kubenswrapper[5072]: I1124 11:42:42.884924 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7nvn\" (UniqueName: \"kubernetes.io/projected/ddef4dcc-c1f4-4057-8503-14afc5bffd37-kube-api-access-q7nvn\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4\" (UID: \"ddef4dcc-c1f4-4057-8503-14afc5bffd37\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4" Nov 24 11:42:43 crc kubenswrapper[5072]: I1124 11:42:43.104962 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4" Nov 24 11:42:43 crc kubenswrapper[5072]: I1124 11:42:43.645422 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:42:43 crc kubenswrapper[5072]: I1124 11:42:43.645750 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:42:43 crc kubenswrapper[5072]: I1124 11:42:43.698661 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4"] Nov 24 11:42:43 crc kubenswrapper[5072]: W1124 11:42:43.707604 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddef4dcc_c1f4_4057_8503_14afc5bffd37.slice/crio-fa32acd02890b698545eb8011fe34426e75458a01f4ca340f02422ea3edea546 WatchSource:0}: Error finding container fa32acd02890b698545eb8011fe34426e75458a01f4ca340f02422ea3edea546: Status 404 returned error can't find the container with id fa32acd02890b698545eb8011fe34426e75458a01f4ca340f02422ea3edea546 Nov 24 11:42:44 crc kubenswrapper[5072]: I1124 11:42:44.675646 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4" event={"ID":"ddef4dcc-c1f4-4057-8503-14afc5bffd37","Type":"ContainerStarted","Data":"538c3b3284f84fddb1668c00c8771d94d547add6817482d65ef4f1d02382aa5a"} Nov 24 11:42:44 crc kubenswrapper[5072]: I1124 11:42:44.676351 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4" event={"ID":"ddef4dcc-c1f4-4057-8503-14afc5bffd37","Type":"ContainerStarted","Data":"fa32acd02890b698545eb8011fe34426e75458a01f4ca340f02422ea3edea546"} Nov 24 11:42:44 crc kubenswrapper[5072]: I1124 11:42:44.699114 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4" podStartSLOduration=2.263008438 podStartE2EDuration="2.699100158s" podCreationTimestamp="2025-11-24 11:42:42 +0000 UTC" firstStartedPulling="2025-11-24 11:42:43.711123091 +0000 UTC m=+2015.422647557" lastFinishedPulling="2025-11-24 11:42:44.147214801 +0000 UTC m=+2015.858739277" observedRunningTime="2025-11-24 11:42:44.696859093 +0000 UTC m=+2016.408383619" watchObservedRunningTime="2025-11-24 11:42:44.699100158 +0000 UTC m=+2016.410624634" Nov 24 11:43:13 crc kubenswrapper[5072]: I1124 11:43:13.509990 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-slj5d"] Nov 24 11:43:13 crc kubenswrapper[5072]: I1124 11:43:13.515016 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-slj5d" Nov 24 11:43:13 crc kubenswrapper[5072]: I1124 11:43:13.523419 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-slj5d"] Nov 24 11:43:13 crc kubenswrapper[5072]: I1124 11:43:13.640989 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/624cab1b-b05a-4860-800e-96840cccfd97-utilities\") pod \"redhat-marketplace-slj5d\" (UID: \"624cab1b-b05a-4860-800e-96840cccfd97\") " pod="openshift-marketplace/redhat-marketplace-slj5d" Nov 24 11:43:13 crc kubenswrapper[5072]: I1124 11:43:13.641297 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn47c\" (UniqueName: \"kubernetes.io/projected/624cab1b-b05a-4860-800e-96840cccfd97-kube-api-access-hn47c\") pod \"redhat-marketplace-slj5d\" (UID: \"624cab1b-b05a-4860-800e-96840cccfd97\") " pod="openshift-marketplace/redhat-marketplace-slj5d" Nov 24 11:43:13 crc kubenswrapper[5072]: I1124 11:43:13.641532 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/624cab1b-b05a-4860-800e-96840cccfd97-catalog-content\") pod \"redhat-marketplace-slj5d\" (UID: \"624cab1b-b05a-4860-800e-96840cccfd97\") " pod="openshift-marketplace/redhat-marketplace-slj5d" Nov 24 11:43:13 crc kubenswrapper[5072]: I1124 11:43:13.645605 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:43:13 crc kubenswrapper[5072]: I1124 11:43:13.645706 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:43:13 crc kubenswrapper[5072]: I1124 11:43:13.743625 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/624cab1b-b05a-4860-800e-96840cccfd97-utilities\") pod \"redhat-marketplace-slj5d\" (UID: \"624cab1b-b05a-4860-800e-96840cccfd97\") " pod="openshift-marketplace/redhat-marketplace-slj5d" Nov 24 11:43:13 crc kubenswrapper[5072]: I1124 11:43:13.743695 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn47c\" (UniqueName: \"kubernetes.io/projected/624cab1b-b05a-4860-800e-96840cccfd97-kube-api-access-hn47c\") pod \"redhat-marketplace-slj5d\" (UID: \"624cab1b-b05a-4860-800e-96840cccfd97\") " pod="openshift-marketplace/redhat-marketplace-slj5d" Nov 24 11:43:13 crc kubenswrapper[5072]: I1124 11:43:13.743819 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/624cab1b-b05a-4860-800e-96840cccfd97-catalog-content\") pod \"redhat-marketplace-slj5d\" (UID: \"624cab1b-b05a-4860-800e-96840cccfd97\") " pod="openshift-marketplace/redhat-marketplace-slj5d" Nov 24 11:43:13 crc kubenswrapper[5072]: I1124 11:43:13.744521 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/624cab1b-b05a-4860-800e-96840cccfd97-catalog-content\") pod \"redhat-marketplace-slj5d\" (UID: \"624cab1b-b05a-4860-800e-96840cccfd97\") " pod="openshift-marketplace/redhat-marketplace-slj5d" Nov 24 11:43:13 crc kubenswrapper[5072]: I1124 11:43:13.744813 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/624cab1b-b05a-4860-800e-96840cccfd97-utilities\") pod \"redhat-marketplace-slj5d\" (UID: \"624cab1b-b05a-4860-800e-96840cccfd97\") " pod="openshift-marketplace/redhat-marketplace-slj5d" Nov 24 11:43:13 crc kubenswrapper[5072]: I1124 11:43:13.767104 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn47c\" (UniqueName: \"kubernetes.io/projected/624cab1b-b05a-4860-800e-96840cccfd97-kube-api-access-hn47c\") pod \"redhat-marketplace-slj5d\" (UID: \"624cab1b-b05a-4860-800e-96840cccfd97\") " pod="openshift-marketplace/redhat-marketplace-slj5d" Nov 24 11:43:13 crc kubenswrapper[5072]: I1124 11:43:13.845354 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-slj5d" Nov 24 11:43:14 crc kubenswrapper[5072]: I1124 11:43:14.323203 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-slj5d"] Nov 24 11:43:14 crc kubenswrapper[5072]: I1124 11:43:14.999030 5072 generic.go:334] "Generic (PLEG): container finished" podID="624cab1b-b05a-4860-800e-96840cccfd97" containerID="4dfdea4b3a760b2bed43fbbb0b817bcd7051f86c3a338fdd4c0c746a7e79bcea" exitCode=0 Nov 24 11:43:14 crc kubenswrapper[5072]: I1124 11:43:14.999137 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-slj5d" event={"ID":"624cab1b-b05a-4860-800e-96840cccfd97","Type":"ContainerDied","Data":"4dfdea4b3a760b2bed43fbbb0b817bcd7051f86c3a338fdd4c0c746a7e79bcea"} Nov 24 11:43:15 crc kubenswrapper[5072]: I1124 11:43:14.999595 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-slj5d" event={"ID":"624cab1b-b05a-4860-800e-96840cccfd97","Type":"ContainerStarted","Data":"9bfff0aaaaeb510488ecb3fe5393b321dc21a38c1c44761c358d91200d34fb09"} Nov 24 11:43:16 crc kubenswrapper[5072]: I1124 11:43:16.011226 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-slj5d" event={"ID":"624cab1b-b05a-4860-800e-96840cccfd97","Type":"ContainerStarted","Data":"a6dc9edc3a30b5aea97fa00acfd2088288c13d320ca815c6cf2cc332289d2be6"} Nov 24 11:43:16 crc kubenswrapper[5072]: I1124 11:43:16.579046 5072 scope.go:117] "RemoveContainer" containerID="1d88395c2efe70f24a107df6739293c60105543b4e4229e74e8b0a5b99430513" Nov 24 11:43:16 crc kubenswrapper[5072]: I1124 11:43:16.623708 5072 scope.go:117] "RemoveContainer" containerID="c834356271529c6c1adb078853d64923e8a035431fdb0383ccbbe222234378be" Nov 24 11:43:16 crc kubenswrapper[5072]: I1124 11:43:16.670729 5072 scope.go:117] "RemoveContainer" containerID="ebaa4b9965366c6c8a7732aed495cc04d83610061550042df64b483ae56e7edb" Nov 24 11:43:16 crc kubenswrapper[5072]: I1124 11:43:16.717222 5072 scope.go:117] "RemoveContainer" containerID="77a3a39bf85af92b1c834d68dbde8708a949c92eabd566ff7cd8bd1d49cb6f9f" Nov 24 11:43:16 crc kubenswrapper[5072]: I1124 11:43:16.806264 5072 scope.go:117] "RemoveContainer" containerID="88a8cf97c05f80035492dc10257e6a33e7c8316097c95fd7fbc33d1e4c88ae5f" Nov 24 11:43:16 crc kubenswrapper[5072]: 
Nov 24 11:43:16 crc kubenswrapper[5072]: I1124 11:43:16.846589 5072 scope.go:117] "RemoveContainer" containerID="66afa1d556fec312d4ddeb598933ce847e0b18eec852dc9e5974621983e42561"
Nov 24 11:43:16 crc kubenswrapper[5072]: I1124 11:43:16.904047 5072 scope.go:117] "RemoveContainer" containerID="a8e5f07e17bf328e8092b0d7be49c38dfe1062e29b1e94e4b268ed6581a78740"
Nov 24 11:43:16 crc kubenswrapper[5072]: I1124 11:43:16.965871 5072 scope.go:117] "RemoveContainer" containerID="f450160e093e287116986029ff191cc07b2f5fb5c29a036fd60bf3b4fb4b79cf"
Nov 24 11:43:16 crc kubenswrapper[5072]: I1124 11:43:16.997569 5072 scope.go:117] "RemoveContainer" containerID="bacde65c0bf7088a571c6dd75c114ac6fdad7e96b5f661ba9978746b8f8f018e"
Nov 24 11:43:17 crc kubenswrapper[5072]: I1124 11:43:17.043365 5072 generic.go:334] "Generic (PLEG): container finished" podID="624cab1b-b05a-4860-800e-96840cccfd97" containerID="a6dc9edc3a30b5aea97fa00acfd2088288c13d320ca815c6cf2cc332289d2be6" exitCode=0
Nov 24 11:43:17 crc kubenswrapper[5072]: I1124 11:43:17.043451 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-slj5d" event={"ID":"624cab1b-b05a-4860-800e-96840cccfd97","Type":"ContainerDied","Data":"a6dc9edc3a30b5aea97fa00acfd2088288c13d320ca815c6cf2cc332289d2be6"}
Nov 24 11:43:18 crc kubenswrapper[5072]: I1124 11:43:18.059627 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-slj5d" event={"ID":"624cab1b-b05a-4860-800e-96840cccfd97","Type":"ContainerStarted","Data":"0a5fcb2896f7c66005f36953b5f5333b1c276d65863b552127b281665069a5d0"}
Nov 24 11:43:23 crc kubenswrapper[5072]: I1124 11:43:23.845537 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-slj5d"
Nov 24 11:43:23 crc kubenswrapper[5072]: I1124 11:43:23.846326 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-slj5d"
Nov 24 11:43:23 crc kubenswrapper[5072]: I1124 11:43:23.901194 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-slj5d"
Nov 24 11:43:23 crc kubenswrapper[5072]: I1124 11:43:23.924280 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-slj5d" podStartSLOduration=8.277048898 podStartE2EDuration="10.924235112s" podCreationTimestamp="2025-11-24 11:43:13 +0000 UTC" firstStartedPulling="2025-11-24 11:43:15.002680208 +0000 UTC m=+2046.714204694" lastFinishedPulling="2025-11-24 11:43:17.649866422 +0000 UTC m=+2049.361390908" observedRunningTime="2025-11-24 11:43:18.084366044 +0000 UTC m=+2049.795890590" watchObservedRunningTime="2025-11-24 11:43:23.924235112 +0000 UTC m=+2055.635759588"
Nov 24 11:43:24 crc kubenswrapper[5072]: I1124 11:43:24.154010 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-slj5d"
Nov 24 11:43:24 crc kubenswrapper[5072]: I1124 11:43:24.208579 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-slj5d"]
Nov 24 11:43:26 crc kubenswrapper[5072]: I1124 11:43:26.142078 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-slj5d" podUID="624cab1b-b05a-4860-800e-96840cccfd97" containerName="registry-server" containerID="cri-o://0a5fcb2896f7c66005f36953b5f5333b1c276d65863b552127b281665069a5d0" gracePeriod=2
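
gracePeriod=2 in the "Killing container with a grace period" entry above means the container gets two seconds between SIGTERM and SIGKILL (the machine-config-daemon kill later in this log uses gracePeriod=600). A sketch of the pattern against a plain local process; the kubelet itself delegates this to CRI-O via the CRI stop-container call rather than signalling directly:

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// killWithGracePeriod sends SIGTERM, waits up to grace for the process
// to exit, then falls back to SIGKILL.
func killWithGracePeriod(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case <-done:
		return nil // exited within the grace period
	case <-time.After(grace):
		fmt.Println("grace period expired, sending SIGKILL")
		return cmd.Process.Kill()
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	_ = killWithGracePeriod(cmd, 2*time.Second) // gracePeriod=2, as in the log
}
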
Nov 24 11:43:26 crc kubenswrapper[5072]: I1124 11:43:26.601264 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-slj5d"
Nov 24 11:43:26 crc kubenswrapper[5072]: I1124 11:43:26.604969 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/624cab1b-b05a-4860-800e-96840cccfd97-catalog-content\") pod \"624cab1b-b05a-4860-800e-96840cccfd97\" (UID: \"624cab1b-b05a-4860-800e-96840cccfd97\") "
Nov 24 11:43:26 crc kubenswrapper[5072]: I1124 11:43:26.605238 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/624cab1b-b05a-4860-800e-96840cccfd97-utilities\") pod \"624cab1b-b05a-4860-800e-96840cccfd97\" (UID: \"624cab1b-b05a-4860-800e-96840cccfd97\") "
Nov 24 11:43:26 crc kubenswrapper[5072]: I1124 11:43:26.605285 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn47c\" (UniqueName: \"kubernetes.io/projected/624cab1b-b05a-4860-800e-96840cccfd97-kube-api-access-hn47c\") pod \"624cab1b-b05a-4860-800e-96840cccfd97\" (UID: \"624cab1b-b05a-4860-800e-96840cccfd97\") "
Nov 24 11:43:26 crc kubenswrapper[5072]: I1124 11:43:26.606045 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/624cab1b-b05a-4860-800e-96840cccfd97-utilities" (OuterVolumeSpecName: "utilities") pod "624cab1b-b05a-4860-800e-96840cccfd97" (UID: "624cab1b-b05a-4860-800e-96840cccfd97"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 11:43:26 crc kubenswrapper[5072]: I1124 11:43:26.610902 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/624cab1b-b05a-4860-800e-96840cccfd97-kube-api-access-hn47c" (OuterVolumeSpecName: "kube-api-access-hn47c") pod "624cab1b-b05a-4860-800e-96840cccfd97" (UID: "624cab1b-b05a-4860-800e-96840cccfd97"). InnerVolumeSpecName "kube-api-access-hn47c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:43:26 crc kubenswrapper[5072]: I1124 11:43:26.650471 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/624cab1b-b05a-4860-800e-96840cccfd97-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "624cab1b-b05a-4860-800e-96840cccfd97" (UID: "624cab1b-b05a-4860-800e-96840cccfd97"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 11:43:26 crc kubenswrapper[5072]: I1124 11:43:26.707423 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/624cab1b-b05a-4860-800e-96840cccfd97-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 11:43:26 crc kubenswrapper[5072]: I1124 11:43:26.707458 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn47c\" (UniqueName: \"kubernetes.io/projected/624cab1b-b05a-4860-800e-96840cccfd97-kube-api-access-hn47c\") on node \"crc\" DevicePath \"\""
Nov 24 11:43:26 crc kubenswrapper[5072]: I1124 11:43:26.707471 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/624cab1b-b05a-4860-800e-96840cccfd97-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 11:43:27 crc kubenswrapper[5072]: I1124 11:43:27.153070 5072 generic.go:334] "Generic (PLEG): container finished" podID="624cab1b-b05a-4860-800e-96840cccfd97" containerID="0a5fcb2896f7c66005f36953b5f5333b1c276d65863b552127b281665069a5d0" exitCode=0
Nov 24 11:43:27 crc kubenswrapper[5072]: I1124 11:43:27.153170 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-slj5d" event={"ID":"624cab1b-b05a-4860-800e-96840cccfd97","Type":"ContainerDied","Data":"0a5fcb2896f7c66005f36953b5f5333b1c276d65863b552127b281665069a5d0"}
Nov 24 11:43:27 crc kubenswrapper[5072]: I1124 11:43:27.153413 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-slj5d" event={"ID":"624cab1b-b05a-4860-800e-96840cccfd97","Type":"ContainerDied","Data":"9bfff0aaaaeb510488ecb3fe5393b321dc21a38c1c44761c358d91200d34fb09"}
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-slj5d" Nov 24 11:43:27 crc kubenswrapper[5072]: I1124 11:43:27.153450 5072 scope.go:117] "RemoveContainer" containerID="0a5fcb2896f7c66005f36953b5f5333b1c276d65863b552127b281665069a5d0" Nov 24 11:43:27 crc kubenswrapper[5072]: I1124 11:43:27.176263 5072 scope.go:117] "RemoveContainer" containerID="a6dc9edc3a30b5aea97fa00acfd2088288c13d320ca815c6cf2cc332289d2be6" Nov 24 11:43:27 crc kubenswrapper[5072]: I1124 11:43:27.182439 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-slj5d"] Nov 24 11:43:27 crc kubenswrapper[5072]: I1124 11:43:27.192482 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-slj5d"] Nov 24 11:43:27 crc kubenswrapper[5072]: I1124 11:43:27.196331 5072 scope.go:117] "RemoveContainer" containerID="4dfdea4b3a760b2bed43fbbb0b817bcd7051f86c3a338fdd4c0c746a7e79bcea" Nov 24 11:43:27 crc kubenswrapper[5072]: I1124 11:43:27.232351 5072 scope.go:117] "RemoveContainer" containerID="0a5fcb2896f7c66005f36953b5f5333b1c276d65863b552127b281665069a5d0" Nov 24 11:43:27 crc kubenswrapper[5072]: E1124 11:43:27.232728 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a5fcb2896f7c66005f36953b5f5333b1c276d65863b552127b281665069a5d0\": container with ID starting with 0a5fcb2896f7c66005f36953b5f5333b1c276d65863b552127b281665069a5d0 not found: ID does not exist" containerID="0a5fcb2896f7c66005f36953b5f5333b1c276d65863b552127b281665069a5d0" Nov 24 11:43:27 crc kubenswrapper[5072]: I1124 11:43:27.232768 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a5fcb2896f7c66005f36953b5f5333b1c276d65863b552127b281665069a5d0"} err="failed to get container status \"0a5fcb2896f7c66005f36953b5f5333b1c276d65863b552127b281665069a5d0\": rpc error: code = NotFound desc = could not find container \"0a5fcb2896f7c66005f36953b5f5333b1c276d65863b552127b281665069a5d0\": container with ID starting with 0a5fcb2896f7c66005f36953b5f5333b1c276d65863b552127b281665069a5d0 not found: ID does not exist" Nov 24 11:43:27 crc kubenswrapper[5072]: I1124 11:43:27.232794 5072 scope.go:117] "RemoveContainer" containerID="a6dc9edc3a30b5aea97fa00acfd2088288c13d320ca815c6cf2cc332289d2be6" Nov 24 11:43:27 crc kubenswrapper[5072]: E1124 11:43:27.233087 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6dc9edc3a30b5aea97fa00acfd2088288c13d320ca815c6cf2cc332289d2be6\": container with ID starting with a6dc9edc3a30b5aea97fa00acfd2088288c13d320ca815c6cf2cc332289d2be6 not found: ID does not exist" containerID="a6dc9edc3a30b5aea97fa00acfd2088288c13d320ca815c6cf2cc332289d2be6" Nov 24 11:43:27 crc kubenswrapper[5072]: I1124 11:43:27.233124 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6dc9edc3a30b5aea97fa00acfd2088288c13d320ca815c6cf2cc332289d2be6"} err="failed to get container status \"a6dc9edc3a30b5aea97fa00acfd2088288c13d320ca815c6cf2cc332289d2be6\": rpc error: code = NotFound desc = could not find container \"a6dc9edc3a30b5aea97fa00acfd2088288c13d320ca815c6cf2cc332289d2be6\": container with ID starting with a6dc9edc3a30b5aea97fa00acfd2088288c13d320ca815c6cf2cc332289d2be6 not found: ID does not exist" Nov 24 11:43:27 crc kubenswrapper[5072]: I1124 11:43:27.233147 5072 scope.go:117] "RemoveContainer" 
containerID="4dfdea4b3a760b2bed43fbbb0b817bcd7051f86c3a338fdd4c0c746a7e79bcea" Nov 24 11:43:27 crc kubenswrapper[5072]: E1124 11:43:27.233400 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dfdea4b3a760b2bed43fbbb0b817bcd7051f86c3a338fdd4c0c746a7e79bcea\": container with ID starting with 4dfdea4b3a760b2bed43fbbb0b817bcd7051f86c3a338fdd4c0c746a7e79bcea not found: ID does not exist" containerID="4dfdea4b3a760b2bed43fbbb0b817bcd7051f86c3a338fdd4c0c746a7e79bcea" Nov 24 11:43:27 crc kubenswrapper[5072]: I1124 11:43:27.233440 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dfdea4b3a760b2bed43fbbb0b817bcd7051f86c3a338fdd4c0c746a7e79bcea"} err="failed to get container status \"4dfdea4b3a760b2bed43fbbb0b817bcd7051f86c3a338fdd4c0c746a7e79bcea\": rpc error: code = NotFound desc = could not find container \"4dfdea4b3a760b2bed43fbbb0b817bcd7051f86c3a338fdd4c0c746a7e79bcea\": container with ID starting with 4dfdea4b3a760b2bed43fbbb0b817bcd7051f86c3a338fdd4c0c746a7e79bcea not found: ID does not exist" Nov 24 11:43:29 crc kubenswrapper[5072]: I1124 11:43:29.028486 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="624cab1b-b05a-4860-800e-96840cccfd97" path="/var/lib/kubelet/pods/624cab1b-b05a-4860-800e-96840cccfd97/volumes" Nov 24 11:43:43 crc kubenswrapper[5072]: I1124 11:43:43.645227 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:43:43 crc kubenswrapper[5072]: I1124 11:43:43.645922 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:43:43 crc kubenswrapper[5072]: I1124 11:43:43.645991 5072 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 11:43:43 crc kubenswrapper[5072]: I1124 11:43:43.647065 5072 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"189ce64d61f8d24afa478e629c32eb4f3644b48f2f7f50733de592c3b81bfb86"} pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 11:43:43 crc kubenswrapper[5072]: I1124 11:43:43.647163 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" containerID="cri-o://189ce64d61f8d24afa478e629c32eb4f3644b48f2f7f50733de592c3b81bfb86" gracePeriod=600 Nov 24 11:43:44 crc kubenswrapper[5072]: I1124 11:43:44.331567 5072 generic.go:334] "Generic (PLEG): container finished" podID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerID="189ce64d61f8d24afa478e629c32eb4f3644b48f2f7f50733de592c3b81bfb86" exitCode=0 Nov 24 11:43:44 crc kubenswrapper[5072]: I1124 11:43:44.334027 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerDied","Data":"189ce64d61f8d24afa478e629c32eb4f3644b48f2f7f50733de592c3b81bfb86"} Nov 24 11:43:44 crc kubenswrapper[5072]: I1124 11:43:44.334082 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerStarted","Data":"6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493"} Nov 24 11:43:44 crc kubenswrapper[5072]: I1124 11:43:44.334109 5072 scope.go:117] "RemoveContainer" containerID="f0239aa581e66fddd8c16af420543c1743e09635c9f82c2f13fdce098c99f8ec" Nov 24 11:44:17 crc kubenswrapper[5072]: I1124 11:44:17.261289 5072 scope.go:117] "RemoveContainer" containerID="c3fb37aaeb9e5ac882e6158fdd7359f212f5dbfaa3d7e6da67936447484f7258" Nov 24 11:44:27 crc kubenswrapper[5072]: I1124 11:44:27.772856 5072 generic.go:334] "Generic (PLEG): container finished" podID="ddef4dcc-c1f4-4057-8503-14afc5bffd37" containerID="538c3b3284f84fddb1668c00c8771d94d547add6817482d65ef4f1d02382aa5a" exitCode=0 Nov 24 11:44:27 crc kubenswrapper[5072]: I1124 11:44:27.772932 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4" event={"ID":"ddef4dcc-c1f4-4057-8503-14afc5bffd37","Type":"ContainerDied","Data":"538c3b3284f84fddb1668c00c8771d94d547add6817482d65ef4f1d02382aa5a"} Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.253159 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.359276 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddef4dcc-c1f4-4057-8503-14afc5bffd37-bootstrap-combined-ca-bundle\") pod \"ddef4dcc-c1f4-4057-8503-14afc5bffd37\" (UID: \"ddef4dcc-c1f4-4057-8503-14afc5bffd37\") " Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.359336 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ddef4dcc-c1f4-4057-8503-14afc5bffd37-ceph\") pod \"ddef4dcc-c1f4-4057-8503-14afc5bffd37\" (UID: \"ddef4dcc-c1f4-4057-8503-14afc5bffd37\") " Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.359391 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7nvn\" (UniqueName: \"kubernetes.io/projected/ddef4dcc-c1f4-4057-8503-14afc5bffd37-kube-api-access-q7nvn\") pod \"ddef4dcc-c1f4-4057-8503-14afc5bffd37\" (UID: \"ddef4dcc-c1f4-4057-8503-14afc5bffd37\") " Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.359466 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddef4dcc-c1f4-4057-8503-14afc5bffd37-inventory\") pod \"ddef4dcc-c1f4-4057-8503-14afc5bffd37\" (UID: \"ddef4dcc-c1f4-4057-8503-14afc5bffd37\") " Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.359566 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ddef4dcc-c1f4-4057-8503-14afc5bffd37-ssh-key\") pod \"ddef4dcc-c1f4-4057-8503-14afc5bffd37\" (UID: \"ddef4dcc-c1f4-4057-8503-14afc5bffd37\") " Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.366056 5072 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddef4dcc-c1f4-4057-8503-14afc5bffd37-kube-api-access-q7nvn" (OuterVolumeSpecName: "kube-api-access-q7nvn") pod "ddef4dcc-c1f4-4057-8503-14afc5bffd37" (UID: "ddef4dcc-c1f4-4057-8503-14afc5bffd37"). InnerVolumeSpecName "kube-api-access-q7nvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.366412 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddef4dcc-c1f4-4057-8503-14afc5bffd37-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ddef4dcc-c1f4-4057-8503-14afc5bffd37" (UID: "ddef4dcc-c1f4-4057-8503-14afc5bffd37"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.367282 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddef4dcc-c1f4-4057-8503-14afc5bffd37-ceph" (OuterVolumeSpecName: "ceph") pod "ddef4dcc-c1f4-4057-8503-14afc5bffd37" (UID: "ddef4dcc-c1f4-4057-8503-14afc5bffd37"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.392457 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddef4dcc-c1f4-4057-8503-14afc5bffd37-inventory" (OuterVolumeSpecName: "inventory") pod "ddef4dcc-c1f4-4057-8503-14afc5bffd37" (UID: "ddef4dcc-c1f4-4057-8503-14afc5bffd37"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.399608 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddef4dcc-c1f4-4057-8503-14afc5bffd37-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ddef4dcc-c1f4-4057-8503-14afc5bffd37" (UID: "ddef4dcc-c1f4-4057-8503-14afc5bffd37"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.461822 5072 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ddef4dcc-c1f4-4057-8503-14afc5bffd37-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.461894 5072 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddef4dcc-c1f4-4057-8503-14afc5bffd37-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.461927 5072 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ddef4dcc-c1f4-4057-8503-14afc5bffd37-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.461955 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7nvn\" (UniqueName: \"kubernetes.io/projected/ddef4dcc-c1f4-4057-8503-14afc5bffd37-kube-api-access-q7nvn\") on node \"crc\" DevicePath \"\"" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.461981 5072 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddef4dcc-c1f4-4057-8503-14afc5bffd37-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.799715 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4" event={"ID":"ddef4dcc-c1f4-4057-8503-14afc5bffd37","Type":"ContainerDied","Data":"fa32acd02890b698545eb8011fe34426e75458a01f4ca340f02422ea3edea546"} Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.799788 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa32acd02890b698545eb8011fe34426e75458a01f4ca340f02422ea3edea546" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.799792 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.899135 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt"] Nov 24 11:44:29 crc kubenswrapper[5072]: E1124 11:44:29.899551 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="624cab1b-b05a-4860-800e-96840cccfd97" containerName="registry-server" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.899574 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="624cab1b-b05a-4860-800e-96840cccfd97" containerName="registry-server" Nov 24 11:44:29 crc kubenswrapper[5072]: E1124 11:44:29.899605 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="624cab1b-b05a-4860-800e-96840cccfd97" containerName="extract-content" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.899615 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="624cab1b-b05a-4860-800e-96840cccfd97" containerName="extract-content" Nov 24 11:44:29 crc kubenswrapper[5072]: E1124 11:44:29.899638 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddef4dcc-c1f4-4057-8503-14afc5bffd37" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.899647 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddef4dcc-c1f4-4057-8503-14afc5bffd37" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 11:44:29 crc kubenswrapper[5072]: E1124 11:44:29.899662 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="624cab1b-b05a-4860-800e-96840cccfd97" containerName="extract-utilities" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.899670 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="624cab1b-b05a-4860-800e-96840cccfd97" containerName="extract-utilities" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.899892 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddef4dcc-c1f4-4057-8503-14afc5bffd37" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.899915 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="624cab1b-b05a-4860-800e-96840cccfd97" containerName="registry-server" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.900608 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.904763 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.905519 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.906031 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.907119 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.912564 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b6s7d" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.924480 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt"] Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.974133 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3960ebf7-e874-4d40-9d12-759d8bf2b312-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt\" (UID: \"3960ebf7-e874-4d40-9d12-759d8bf2b312\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.976356 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jznhp\" (UniqueName: \"kubernetes.io/projected/3960ebf7-e874-4d40-9d12-759d8bf2b312-kube-api-access-jznhp\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt\" (UID: \"3960ebf7-e874-4d40-9d12-759d8bf2b312\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.976512 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3960ebf7-e874-4d40-9d12-759d8bf2b312-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt\" (UID: \"3960ebf7-e874-4d40-9d12-759d8bf2b312\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt" Nov 24 11:44:29 crc kubenswrapper[5072]: I1124 11:44:29.976712 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3960ebf7-e874-4d40-9d12-759d8bf2b312-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt\" (UID: \"3960ebf7-e874-4d40-9d12-759d8bf2b312\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt" Nov 24 11:44:30 crc kubenswrapper[5072]: I1124 11:44:30.078899 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jznhp\" (UniqueName: \"kubernetes.io/projected/3960ebf7-e874-4d40-9d12-759d8bf2b312-kube-api-access-jznhp\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt\" (UID: \"3960ebf7-e874-4d40-9d12-759d8bf2b312\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt" Nov 24 11:44:30 crc kubenswrapper[5072]: I1124 11:44:30.078981 5072 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3960ebf7-e874-4d40-9d12-759d8bf2b312-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt\" (UID: \"3960ebf7-e874-4d40-9d12-759d8bf2b312\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt" Nov 24 11:44:30 crc kubenswrapper[5072]: I1124 11:44:30.079030 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3960ebf7-e874-4d40-9d12-759d8bf2b312-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt\" (UID: \"3960ebf7-e874-4d40-9d12-759d8bf2b312\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt" Nov 24 11:44:30 crc kubenswrapper[5072]: I1124 11:44:30.079147 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3960ebf7-e874-4d40-9d12-759d8bf2b312-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt\" (UID: \"3960ebf7-e874-4d40-9d12-759d8bf2b312\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt" Nov 24 11:44:30 crc kubenswrapper[5072]: I1124 11:44:30.086838 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3960ebf7-e874-4d40-9d12-759d8bf2b312-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt\" (UID: \"3960ebf7-e874-4d40-9d12-759d8bf2b312\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt" Nov 24 11:44:30 crc kubenswrapper[5072]: I1124 11:44:30.088909 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3960ebf7-e874-4d40-9d12-759d8bf2b312-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt\" (UID: \"3960ebf7-e874-4d40-9d12-759d8bf2b312\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt" Nov 24 11:44:30 crc kubenswrapper[5072]: I1124 11:44:30.090527 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3960ebf7-e874-4d40-9d12-759d8bf2b312-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt\" (UID: \"3960ebf7-e874-4d40-9d12-759d8bf2b312\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt" Nov 24 11:44:30 crc kubenswrapper[5072]: I1124 11:44:30.110834 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jznhp\" (UniqueName: \"kubernetes.io/projected/3960ebf7-e874-4d40-9d12-759d8bf2b312-kube-api-access-jznhp\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt\" (UID: \"3960ebf7-e874-4d40-9d12-759d8bf2b312\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt" Nov 24 11:44:30 crc kubenswrapper[5072]: I1124 11:44:30.221531 5072 util.go:30] "No sandbox for pod can be found. 
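Each pod in this section passes through the same three volume phases: operationExecutor.VerifyControllerAttachedVolume, then operationExecutor.MountVolume started, then MountVolume.SetUp succeeded. The shape is a desired-state versus actual-state reconciler; a minimal sketch of that control flow with assumed types, not the kubelet's volumemanager:

// volreconcile.go, a minimal desired-vs-actual loop in the shape of the
// three-step sequence above (Verify -> MountVolume started -> SetUp
// succeeded). Types and names are illustrative.
package main

import "fmt"

type volume struct{ name, pod string }

// reconcile mounts every desired volume that is not mounted yet; a failed
// SetUp is simply left for the next pass, which is why the kubelet retries.
func reconcile(desired []volume, mounted map[string]bool, setUp func(volume) error) {
	for _, v := range desired {
		if mounted[v.name] {
			continue
		}
		fmt.Printf("MountVolume started for volume %q pod %q\n", v.name, v.pod)
		if err := setUp(v); err != nil {
			fmt.Printf("MountVolume.SetUp failed for %q: %v\n", v.name, err)
			continue
		}
		mounted[v.name] = true
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
	}
}

func main() {
	pod := "configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt"
	desired := []volume{{"inventory", pod}, {"ssh-key", pod}, {"ceph", pod}, {"kube-api-access-jznhp", pod}}
	reconcile(desired, map[string]bool{}, func(volume) error { return nil })
}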
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt" Nov 24 11:44:30 crc kubenswrapper[5072]: I1124 11:44:30.814295 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt"] Nov 24 11:44:31 crc kubenswrapper[5072]: I1124 11:44:31.819048 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt" event={"ID":"3960ebf7-e874-4d40-9d12-759d8bf2b312","Type":"ContainerStarted","Data":"4a78a70f56a603b7a4ce767f6f44ac5a07cb87c50da37c4e1149941d7cd18d75"} Nov 24 11:44:31 crc kubenswrapper[5072]: I1124 11:44:31.819682 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt" event={"ID":"3960ebf7-e874-4d40-9d12-759d8bf2b312","Type":"ContainerStarted","Data":"ab1253078cfa8470cb8154c9ba805a4524ceb473d673e0f817927cf60948cc83"} Nov 24 11:44:32 crc kubenswrapper[5072]: I1124 11:44:32.852142 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt" podStartSLOduration=3.349909237 podStartE2EDuration="3.852121683s" podCreationTimestamp="2025-11-24 11:44:29 +0000 UTC" firstStartedPulling="2025-11-24 11:44:30.825310498 +0000 UTC m=+2122.536834984" lastFinishedPulling="2025-11-24 11:44:31.327522924 +0000 UTC m=+2123.039047430" observedRunningTime="2025-11-24 11:44:32.84224117 +0000 UTC m=+2124.553765666" watchObservedRunningTime="2025-11-24 11:44:32.852121683 +0000 UTC m=+2124.563646169" Nov 24 11:44:59 crc kubenswrapper[5072]: I1124 11:44:59.697677 5072 generic.go:334] "Generic (PLEG): container finished" podID="3960ebf7-e874-4d40-9d12-759d8bf2b312" containerID="4a78a70f56a603b7a4ce767f6f44ac5a07cb87c50da37c4e1149941d7cd18d75" exitCode=0 Nov 24 11:44:59 crc kubenswrapper[5072]: I1124 11:44:59.697831 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt" event={"ID":"3960ebf7-e874-4d40-9d12-759d8bf2b312","Type":"ContainerDied","Data":"4a78a70f56a603b7a4ce767f6f44ac5a07cb87c50da37c4e1149941d7cd18d75"} Nov 24 11:45:00 crc kubenswrapper[5072]: I1124 11:45:00.177832 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399745-lr9s2"] Nov 24 11:45:00 crc kubenswrapper[5072]: I1124 11:45:00.179116 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-lr9s2" Nov 24 11:45:00 crc kubenswrapper[5072]: I1124 11:45:00.183566 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 11:45:00 crc kubenswrapper[5072]: I1124 11:45:00.183683 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 11:45:00 crc kubenswrapper[5072]: I1124 11:45:00.189875 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399745-lr9s2"] Nov 24 11:45:00 crc kubenswrapper[5072]: I1124 11:45:00.350444 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5664d\" (UniqueName: \"kubernetes.io/projected/fb3542d8-1d20-441f-8af8-031a8559c49b-kube-api-access-5664d\") pod \"collect-profiles-29399745-lr9s2\" (UID: \"fb3542d8-1d20-441f-8af8-031a8559c49b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-lr9s2" Nov 24 11:45:00 crc kubenswrapper[5072]: I1124 11:45:00.350494 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb3542d8-1d20-441f-8af8-031a8559c49b-config-volume\") pod \"collect-profiles-29399745-lr9s2\" (UID: \"fb3542d8-1d20-441f-8af8-031a8559c49b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-lr9s2" Nov 24 11:45:00 crc kubenswrapper[5072]: I1124 11:45:00.350545 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb3542d8-1d20-441f-8af8-031a8559c49b-secret-volume\") pod \"collect-profiles-29399745-lr9s2\" (UID: \"fb3542d8-1d20-441f-8af8-031a8559c49b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-lr9s2" Nov 24 11:45:00 crc kubenswrapper[5072]: I1124 11:45:00.452542 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5664d\" (UniqueName: \"kubernetes.io/projected/fb3542d8-1d20-441f-8af8-031a8559c49b-kube-api-access-5664d\") pod \"collect-profiles-29399745-lr9s2\" (UID: \"fb3542d8-1d20-441f-8af8-031a8559c49b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-lr9s2" Nov 24 11:45:00 crc kubenswrapper[5072]: I1124 11:45:00.452586 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb3542d8-1d20-441f-8af8-031a8559c49b-config-volume\") pod \"collect-profiles-29399745-lr9s2\" (UID: \"fb3542d8-1d20-441f-8af8-031a8559c49b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-lr9s2" Nov 24 11:45:00 crc kubenswrapper[5072]: I1124 11:45:00.452645 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb3542d8-1d20-441f-8af8-031a8559c49b-secret-volume\") pod \"collect-profiles-29399745-lr9s2\" (UID: \"fb3542d8-1d20-441f-8af8-031a8559c49b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-lr9s2" Nov 24 11:45:00 crc kubenswrapper[5072]: I1124 11:45:00.453968 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb3542d8-1d20-441f-8af8-031a8559c49b-config-volume\") pod 
\"collect-profiles-29399745-lr9s2\" (UID: \"fb3542d8-1d20-441f-8af8-031a8559c49b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-lr9s2" Nov 24 11:45:00 crc kubenswrapper[5072]: I1124 11:45:00.461991 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb3542d8-1d20-441f-8af8-031a8559c49b-secret-volume\") pod \"collect-profiles-29399745-lr9s2\" (UID: \"fb3542d8-1d20-441f-8af8-031a8559c49b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-lr9s2" Nov 24 11:45:00 crc kubenswrapper[5072]: I1124 11:45:00.471057 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5664d\" (UniqueName: \"kubernetes.io/projected/fb3542d8-1d20-441f-8af8-031a8559c49b-kube-api-access-5664d\") pod \"collect-profiles-29399745-lr9s2\" (UID: \"fb3542d8-1d20-441f-8af8-031a8559c49b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-lr9s2" Nov 24 11:45:00 crc kubenswrapper[5072]: I1124 11:45:00.509455 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-lr9s2" Nov 24 11:45:00 crc kubenswrapper[5072]: I1124 11:45:00.991088 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399745-lr9s2"] Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.039196 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt" Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.166539 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3960ebf7-e874-4d40-9d12-759d8bf2b312-inventory\") pod \"3960ebf7-e874-4d40-9d12-759d8bf2b312\" (UID: \"3960ebf7-e874-4d40-9d12-759d8bf2b312\") " Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.166610 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jznhp\" (UniqueName: \"kubernetes.io/projected/3960ebf7-e874-4d40-9d12-759d8bf2b312-kube-api-access-jznhp\") pod \"3960ebf7-e874-4d40-9d12-759d8bf2b312\" (UID: \"3960ebf7-e874-4d40-9d12-759d8bf2b312\") " Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.166799 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3960ebf7-e874-4d40-9d12-759d8bf2b312-ssh-key\") pod \"3960ebf7-e874-4d40-9d12-759d8bf2b312\" (UID: \"3960ebf7-e874-4d40-9d12-759d8bf2b312\") " Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.166890 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3960ebf7-e874-4d40-9d12-759d8bf2b312-ceph\") pod \"3960ebf7-e874-4d40-9d12-759d8bf2b312\" (UID: \"3960ebf7-e874-4d40-9d12-759d8bf2b312\") " Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.172418 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3960ebf7-e874-4d40-9d12-759d8bf2b312-ceph" (OuterVolumeSpecName: "ceph") pod "3960ebf7-e874-4d40-9d12-759d8bf2b312" (UID: "3960ebf7-e874-4d40-9d12-759d8bf2b312"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.172681 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3960ebf7-e874-4d40-9d12-759d8bf2b312-kube-api-access-jznhp" (OuterVolumeSpecName: "kube-api-access-jznhp") pod "3960ebf7-e874-4d40-9d12-759d8bf2b312" (UID: "3960ebf7-e874-4d40-9d12-759d8bf2b312"). InnerVolumeSpecName "kube-api-access-jznhp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.194790 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3960ebf7-e874-4d40-9d12-759d8bf2b312-inventory" (OuterVolumeSpecName: "inventory") pod "3960ebf7-e874-4d40-9d12-759d8bf2b312" (UID: "3960ebf7-e874-4d40-9d12-759d8bf2b312"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.199962 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3960ebf7-e874-4d40-9d12-759d8bf2b312-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3960ebf7-e874-4d40-9d12-759d8bf2b312" (UID: "3960ebf7-e874-4d40-9d12-759d8bf2b312"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.269750 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jznhp\" (UniqueName: \"kubernetes.io/projected/3960ebf7-e874-4d40-9d12-759d8bf2b312-kube-api-access-jznhp\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.269789 5072 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3960ebf7-e874-4d40-9d12-759d8bf2b312-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.269806 5072 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3960ebf7-e874-4d40-9d12-759d8bf2b312-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.269815 5072 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3960ebf7-e874-4d40-9d12-759d8bf2b312-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.717518 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt" Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.717511 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt" event={"ID":"3960ebf7-e874-4d40-9d12-759d8bf2b312","Type":"ContainerDied","Data":"ab1253078cfa8470cb8154c9ba805a4524ceb473d673e0f817927cf60948cc83"} Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.717637 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab1253078cfa8470cb8154c9ba805a4524ceb473d673e0f817927cf60948cc83" Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.719601 5072 generic.go:334] "Generic (PLEG): container finished" podID="fb3542d8-1d20-441f-8af8-031a8559c49b" containerID="efd5842877ce866c92ce3b1b26eacbb8c5a7ba097d3f2d26e8e369edc733bba7" exitCode=0 Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.719671 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-lr9s2" event={"ID":"fb3542d8-1d20-441f-8af8-031a8559c49b","Type":"ContainerDied","Data":"efd5842877ce866c92ce3b1b26eacbb8c5a7ba097d3f2d26e8e369edc733bba7"} Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.719715 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-lr9s2" event={"ID":"fb3542d8-1d20-441f-8af8-031a8559c49b","Type":"ContainerStarted","Data":"2f06e16b87eba1f1c9b9aa2ff1126e060e342df66a6f4e5f38e42fbce2479d10"} Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.818992 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj"] Nov 24 11:45:01 crc kubenswrapper[5072]: E1124 11:45:01.819467 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3960ebf7-e874-4d40-9d12-759d8bf2b312" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.819490 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="3960ebf7-e874-4d40-9d12-759d8bf2b312" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.819751 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="3960ebf7-e874-4d40-9d12-759d8bf2b312" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.820470 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj" Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.854071 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.854495 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.854677 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.854791 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b6s7d" Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.854817 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.869272 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj"] Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.887462 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2f1ddd2f-edb5-4613-9fde-a27861d899bc-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj\" (UID: \"2f1ddd2f-edb5-4613-9fde-a27861d899bc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj" Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.887566 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9shj\" (UniqueName: \"kubernetes.io/projected/2f1ddd2f-edb5-4613-9fde-a27861d899bc-kube-api-access-n9shj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj\" (UID: \"2f1ddd2f-edb5-4613-9fde-a27861d899bc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj" Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.887632 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f1ddd2f-edb5-4613-9fde-a27861d899bc-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj\" (UID: \"2f1ddd2f-edb5-4613-9fde-a27861d899bc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj" Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.887821 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f1ddd2f-edb5-4613-9fde-a27861d899bc-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj\" (UID: \"2f1ddd2f-edb5-4613-9fde-a27861d899bc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj" Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.989976 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2f1ddd2f-edb5-4613-9fde-a27861d899bc-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj\" (UID: \"2f1ddd2f-edb5-4613-9fde-a27861d899bc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj" Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.990070 5072 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-n9shj\" (UniqueName: \"kubernetes.io/projected/2f1ddd2f-edb5-4613-9fde-a27861d899bc-kube-api-access-n9shj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj\" (UID: \"2f1ddd2f-edb5-4613-9fde-a27861d899bc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj" Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.990128 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f1ddd2f-edb5-4613-9fde-a27861d899bc-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj\" (UID: \"2f1ddd2f-edb5-4613-9fde-a27861d899bc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj" Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.990186 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f1ddd2f-edb5-4613-9fde-a27861d899bc-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj\" (UID: \"2f1ddd2f-edb5-4613-9fde-a27861d899bc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj" Nov 24 11:45:01 crc kubenswrapper[5072]: I1124 11:45:01.998841 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f1ddd2f-edb5-4613-9fde-a27861d899bc-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj\" (UID: \"2f1ddd2f-edb5-4613-9fde-a27861d899bc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj" Nov 24 11:45:02 crc kubenswrapper[5072]: I1124 11:45:02.004639 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f1ddd2f-edb5-4613-9fde-a27861d899bc-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj\" (UID: \"2f1ddd2f-edb5-4613-9fde-a27861d899bc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj" Nov 24 11:45:02 crc kubenswrapper[5072]: I1124 11:45:02.007948 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2f1ddd2f-edb5-4613-9fde-a27861d899bc-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj\" (UID: \"2f1ddd2f-edb5-4613-9fde-a27861d899bc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj" Nov 24 11:45:02 crc kubenswrapper[5072]: I1124 11:45:02.010304 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9shj\" (UniqueName: \"kubernetes.io/projected/2f1ddd2f-edb5-4613-9fde-a27861d899bc-kube-api-access-n9shj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj\" (UID: \"2f1ddd2f-edb5-4613-9fde-a27861d899bc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj" Nov 24 11:45:02 crc kubenswrapper[5072]: I1124 11:45:02.175888 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj" Nov 24 11:45:02 crc kubenswrapper[5072]: I1124 11:45:02.493252 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj"] Nov 24 11:45:02 crc kubenswrapper[5072]: W1124 11:45:02.501189 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f1ddd2f_edb5_4613_9fde_a27861d899bc.slice/crio-fca629af89876dacc5cb925d00ac8801c340108e500df9d5ed730076fef5e496 WatchSource:0}: Error finding container fca629af89876dacc5cb925d00ac8801c340108e500df9d5ed730076fef5e496: Status 404 returned error can't find the container with id fca629af89876dacc5cb925d00ac8801c340108e500df9d5ed730076fef5e496 Nov 24 11:45:02 crc kubenswrapper[5072]: I1124 11:45:02.729795 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj" event={"ID":"2f1ddd2f-edb5-4613-9fde-a27861d899bc","Type":"ContainerStarted","Data":"fca629af89876dacc5cb925d00ac8801c340108e500df9d5ed730076fef5e496"} Nov 24 11:45:03 crc kubenswrapper[5072]: I1124 11:45:03.065650 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-lr9s2" Nov 24 11:45:03 crc kubenswrapper[5072]: I1124 11:45:03.108020 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5664d\" (UniqueName: \"kubernetes.io/projected/fb3542d8-1d20-441f-8af8-031a8559c49b-kube-api-access-5664d\") pod \"fb3542d8-1d20-441f-8af8-031a8559c49b\" (UID: \"fb3542d8-1d20-441f-8af8-031a8559c49b\") " Nov 24 11:45:03 crc kubenswrapper[5072]: I1124 11:45:03.108070 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb3542d8-1d20-441f-8af8-031a8559c49b-secret-volume\") pod \"fb3542d8-1d20-441f-8af8-031a8559c49b\" (UID: \"fb3542d8-1d20-441f-8af8-031a8559c49b\") " Nov 24 11:45:03 crc kubenswrapper[5072]: I1124 11:45:03.108115 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb3542d8-1d20-441f-8af8-031a8559c49b-config-volume\") pod \"fb3542d8-1d20-441f-8af8-031a8559c49b\" (UID: \"fb3542d8-1d20-441f-8af8-031a8559c49b\") " Nov 24 11:45:03 crc kubenswrapper[5072]: I1124 11:45:03.109055 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb3542d8-1d20-441f-8af8-031a8559c49b-config-volume" (OuterVolumeSpecName: "config-volume") pod "fb3542d8-1d20-441f-8af8-031a8559c49b" (UID: "fb3542d8-1d20-441f-8af8-031a8559c49b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:45:03 crc kubenswrapper[5072]: I1124 11:45:03.114373 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb3542d8-1d20-441f-8af8-031a8559c49b-kube-api-access-5664d" (OuterVolumeSpecName: "kube-api-access-5664d") pod "fb3542d8-1d20-441f-8af8-031a8559c49b" (UID: "fb3542d8-1d20-441f-8af8-031a8559c49b"). InnerVolumeSpecName "kube-api-access-5664d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:45:03 crc kubenswrapper[5072]: I1124 11:45:03.114964 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb3542d8-1d20-441f-8af8-031a8559c49b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fb3542d8-1d20-441f-8af8-031a8559c49b" (UID: "fb3542d8-1d20-441f-8af8-031a8559c49b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:45:03 crc kubenswrapper[5072]: I1124 11:45:03.210213 5072 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb3542d8-1d20-441f-8af8-031a8559c49b-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:03 crc kubenswrapper[5072]: I1124 11:45:03.210252 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5664d\" (UniqueName: \"kubernetes.io/projected/fb3542d8-1d20-441f-8af8-031a8559c49b-kube-api-access-5664d\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:03 crc kubenswrapper[5072]: I1124 11:45:03.210267 5072 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb3542d8-1d20-441f-8af8-031a8559c49b-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:03 crc kubenswrapper[5072]: I1124 11:45:03.742192 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-lr9s2" event={"ID":"fb3542d8-1d20-441f-8af8-031a8559c49b","Type":"ContainerDied","Data":"2f06e16b87eba1f1c9b9aa2ff1126e060e342df66a6f4e5f38e42fbce2479d10"} Nov 24 11:45:03 crc kubenswrapper[5072]: I1124 11:45:03.742580 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f06e16b87eba1f1c9b9aa2ff1126e060e342df66a6f4e5f38e42fbce2479d10" Nov 24 11:45:03 crc kubenswrapper[5072]: I1124 11:45:03.742672 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-lr9s2" Nov 24 11:45:03 crc kubenswrapper[5072]: I1124 11:45:03.747751 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj" event={"ID":"2f1ddd2f-edb5-4613-9fde-a27861d899bc","Type":"ContainerStarted","Data":"4424d3fb659be1a03c7bc9ed01cd64e907a9dbf21ef70da18f7940c513d87333"} Nov 24 11:45:03 crc kubenswrapper[5072]: I1124 11:45:03.781332 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj" podStartSLOduration=2.14854931 podStartE2EDuration="2.781307043s" podCreationTimestamp="2025-11-24 11:45:01 +0000 UTC" firstStartedPulling="2025-11-24 11:45:02.503285577 +0000 UTC m=+2154.214810053" lastFinishedPulling="2025-11-24 11:45:03.1360433 +0000 UTC m=+2154.847567786" observedRunningTime="2025-11-24 11:45:03.771307287 +0000 UTC m=+2155.482831763" watchObservedRunningTime="2025-11-24 11:45:03.781307043 +0000 UTC m=+2155.492831529" Nov 24 11:45:04 crc kubenswrapper[5072]: I1124 11:45:04.158978 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399700-hnjjf"] Nov 24 11:45:04 crc kubenswrapper[5072]: I1124 11:45:04.166203 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399700-hnjjf"] Nov 24 11:45:05 crc kubenswrapper[5072]: I1124 11:45:05.027379 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96be0671-6ddf-4af0-8989-da8c4a4dcfa7" path="/var/lib/kubelet/pods/96be0671-6ddf-4af0-8989-da8c4a4dcfa7/volumes" Nov 24 11:45:08 crc kubenswrapper[5072]: I1124 11:45:08.790026 5072 generic.go:334] "Generic (PLEG): container finished" podID="2f1ddd2f-edb5-4613-9fde-a27861d899bc" containerID="4424d3fb659be1a03c7bc9ed01cd64e907a9dbf21ef70da18f7940c513d87333" exitCode=0 Nov 24 11:45:08 crc kubenswrapper[5072]: I1124 11:45:08.790624 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj" event={"ID":"2f1ddd2f-edb5-4613-9fde-a27861d899bc","Type":"ContainerDied","Data":"4424d3fb659be1a03c7bc9ed01cd64e907a9dbf21ef70da18f7940c513d87333"} Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.242882 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj" Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.404635 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9shj\" (UniqueName: \"kubernetes.io/projected/2f1ddd2f-edb5-4613-9fde-a27861d899bc-kube-api-access-n9shj\") pod \"2f1ddd2f-edb5-4613-9fde-a27861d899bc\" (UID: \"2f1ddd2f-edb5-4613-9fde-a27861d899bc\") " Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.404743 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f1ddd2f-edb5-4613-9fde-a27861d899bc-ssh-key\") pod \"2f1ddd2f-edb5-4613-9fde-a27861d899bc\" (UID: \"2f1ddd2f-edb5-4613-9fde-a27861d899bc\") " Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.404784 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2f1ddd2f-edb5-4613-9fde-a27861d899bc-ceph\") pod \"2f1ddd2f-edb5-4613-9fde-a27861d899bc\" (UID: \"2f1ddd2f-edb5-4613-9fde-a27861d899bc\") " Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.404867 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f1ddd2f-edb5-4613-9fde-a27861d899bc-inventory\") pod \"2f1ddd2f-edb5-4613-9fde-a27861d899bc\" (UID: \"2f1ddd2f-edb5-4613-9fde-a27861d899bc\") " Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.409822 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f1ddd2f-edb5-4613-9fde-a27861d899bc-ceph" (OuterVolumeSpecName: "ceph") pod "2f1ddd2f-edb5-4613-9fde-a27861d899bc" (UID: "2f1ddd2f-edb5-4613-9fde-a27861d899bc"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.422594 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f1ddd2f-edb5-4613-9fde-a27861d899bc-kube-api-access-n9shj" (OuterVolumeSpecName: "kube-api-access-n9shj") pod "2f1ddd2f-edb5-4613-9fde-a27861d899bc" (UID: "2f1ddd2f-edb5-4613-9fde-a27861d899bc"). InnerVolumeSpecName "kube-api-access-n9shj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.429878 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f1ddd2f-edb5-4613-9fde-a27861d899bc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2f1ddd2f-edb5-4613-9fde-a27861d899bc" (UID: "2f1ddd2f-edb5-4613-9fde-a27861d899bc"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.430241 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f1ddd2f-edb5-4613-9fde-a27861d899bc-inventory" (OuterVolumeSpecName: "inventory") pod "2f1ddd2f-edb5-4613-9fde-a27861d899bc" (UID: "2f1ddd2f-edb5-4613-9fde-a27861d899bc"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.507737 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9shj\" (UniqueName: \"kubernetes.io/projected/2f1ddd2f-edb5-4613-9fde-a27861d899bc-kube-api-access-n9shj\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.507775 5072 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f1ddd2f-edb5-4613-9fde-a27861d899bc-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.507793 5072 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2f1ddd2f-edb5-4613-9fde-a27861d899bc-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.507805 5072 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f1ddd2f-edb5-4613-9fde-a27861d899bc-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.810111 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj" event={"ID":"2f1ddd2f-edb5-4613-9fde-a27861d899bc","Type":"ContainerDied","Data":"fca629af89876dacc5cb925d00ac8801c340108e500df9d5ed730076fef5e496"} Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.810152 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fca629af89876dacc5cb925d00ac8801c340108e500df9d5ed730076fef5e496" Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.810245 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj" Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.899696 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lrxgj"] Nov 24 11:45:10 crc kubenswrapper[5072]: E1124 11:45:10.900586 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb3542d8-1d20-441f-8af8-031a8559c49b" containerName="collect-profiles" Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.900620 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb3542d8-1d20-441f-8af8-031a8559c49b" containerName="collect-profiles" Nov 24 11:45:10 crc kubenswrapper[5072]: E1124 11:45:10.900685 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f1ddd2f-edb5-4613-9fde-a27861d899bc" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.900699 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f1ddd2f-edb5-4613-9fde-a27861d899bc" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.900994 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb3542d8-1d20-441f-8af8-031a8559c49b" containerName="collect-profiles" Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.901049 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f1ddd2f-edb5-4613-9fde-a27861d899bc" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.902196 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lrxgj" Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.904690 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.905066 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.905315 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.905586 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.905891 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b6s7d" Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.923839 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvxpn\" (UniqueName: \"kubernetes.io/projected/b7687777-0417-42e1-8f0e-201de683f32d-kube-api-access-fvxpn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lrxgj\" (UID: \"b7687777-0417-42e1-8f0e-201de683f32d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lrxgj" Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.924053 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b7687777-0417-42e1-8f0e-201de683f32d-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lrxgj\" (UID: \"b7687777-0417-42e1-8f0e-201de683f32d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lrxgj" Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.924111 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7687777-0417-42e1-8f0e-201de683f32d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lrxgj\" (UID: \"b7687777-0417-42e1-8f0e-201de683f32d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lrxgj" Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.924163 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7687777-0417-42e1-8f0e-201de683f32d-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lrxgj\" (UID: \"b7687777-0417-42e1-8f0e-201de683f32d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lrxgj" Nov 24 11:45:10 crc kubenswrapper[5072]: I1124 11:45:10.924783 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lrxgj"] Nov 24 11:45:11 crc kubenswrapper[5072]: I1124 11:45:11.029637 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvxpn\" (UniqueName: \"kubernetes.io/projected/b7687777-0417-42e1-8f0e-201de683f32d-kube-api-access-fvxpn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lrxgj\" (UID: \"b7687777-0417-42e1-8f0e-201de683f32d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lrxgj" Nov 24 11:45:11 crc kubenswrapper[5072]: I1124 11:45:11.030867 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" 
(UniqueName: \"kubernetes.io/secret/b7687777-0417-42e1-8f0e-201de683f32d-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lrxgj\" (UID: \"b7687777-0417-42e1-8f0e-201de683f32d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lrxgj" Nov 24 11:45:11 crc kubenswrapper[5072]: I1124 11:45:11.030978 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7687777-0417-42e1-8f0e-201de683f32d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lrxgj\" (UID: \"b7687777-0417-42e1-8f0e-201de683f32d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lrxgj" Nov 24 11:45:11 crc kubenswrapper[5072]: I1124 11:45:11.031066 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7687777-0417-42e1-8f0e-201de683f32d-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lrxgj\" (UID: \"b7687777-0417-42e1-8f0e-201de683f32d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lrxgj" Nov 24 11:45:11 crc kubenswrapper[5072]: I1124 11:45:11.036622 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7687777-0417-42e1-8f0e-201de683f32d-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lrxgj\" (UID: \"b7687777-0417-42e1-8f0e-201de683f32d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lrxgj" Nov 24 11:45:11 crc kubenswrapper[5072]: I1124 11:45:11.036550 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b7687777-0417-42e1-8f0e-201de683f32d-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lrxgj\" (UID: \"b7687777-0417-42e1-8f0e-201de683f32d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lrxgj" Nov 24 11:45:11 crc kubenswrapper[5072]: I1124 11:45:11.037640 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7687777-0417-42e1-8f0e-201de683f32d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lrxgj\" (UID: \"b7687777-0417-42e1-8f0e-201de683f32d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lrxgj" Nov 24 11:45:11 crc kubenswrapper[5072]: I1124 11:45:11.055420 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvxpn\" (UniqueName: \"kubernetes.io/projected/b7687777-0417-42e1-8f0e-201de683f32d-kube-api-access-fvxpn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lrxgj\" (UID: \"b7687777-0417-42e1-8f0e-201de683f32d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lrxgj" Nov 24 11:45:11 crc kubenswrapper[5072]: I1124 11:45:11.223747 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lrxgj" Nov 24 11:45:11 crc kubenswrapper[5072]: I1124 11:45:11.587435 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lrxgj"] Nov 24 11:45:11 crc kubenswrapper[5072]: I1124 11:45:11.825609 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lrxgj" event={"ID":"b7687777-0417-42e1-8f0e-201de683f32d","Type":"ContainerStarted","Data":"b8e48577015afba0d523745b2a4d34d7be7a8c0646a8a80cc08404b1b94dc202"} Nov 24 11:45:12 crc kubenswrapper[5072]: I1124 11:45:12.835833 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lrxgj" event={"ID":"b7687777-0417-42e1-8f0e-201de683f32d","Type":"ContainerStarted","Data":"bc29ca6d5c38010a544a201d305982e7ce6270585722500da3b25a7d72a8b34b"} Nov 24 11:45:12 crc kubenswrapper[5072]: I1124 11:45:12.856646 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lrxgj" podStartSLOduration=2.268502809 podStartE2EDuration="2.856630937s" podCreationTimestamp="2025-11-24 11:45:10 +0000 UTC" firstStartedPulling="2025-11-24 11:45:11.604270601 +0000 UTC m=+2163.315795107" lastFinishedPulling="2025-11-24 11:45:12.192398759 +0000 UTC m=+2163.903923235" observedRunningTime="2025-11-24 11:45:12.852807963 +0000 UTC m=+2164.564332429" watchObservedRunningTime="2025-11-24 11:45:12.856630937 +0000 UTC m=+2164.568155403" Nov 24 11:45:17 crc kubenswrapper[5072]: I1124 11:45:17.366434 5072 scope.go:117] "RemoveContainer" containerID="c48dcbaf38f2a63fd2677bbd5dc38e2f921e4b8b27185ac7837b2e5a55a30906" Nov 24 11:45:43 crc kubenswrapper[5072]: I1124 11:45:43.644467 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:45:43 crc kubenswrapper[5072]: I1124 11:45:43.645000 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:45:52 crc kubenswrapper[5072]: I1124 11:45:52.240596 5072 generic.go:334] "Generic (PLEG): container finished" podID="b7687777-0417-42e1-8f0e-201de683f32d" containerID="bc29ca6d5c38010a544a201d305982e7ce6270585722500da3b25a7d72a8b34b" exitCode=0 Nov 24 11:45:52 crc kubenswrapper[5072]: I1124 11:45:52.240739 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lrxgj" event={"ID":"b7687777-0417-42e1-8f0e-201de683f32d","Type":"ContainerDied","Data":"bc29ca6d5c38010a544a201d305982e7ce6270585722500da3b25a7d72a8b34b"} Nov 24 11:45:53 crc kubenswrapper[5072]: I1124 11:45:53.678156 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lrxgj" Nov 24 11:45:53 crc kubenswrapper[5072]: I1124 11:45:53.862005 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7687777-0417-42e1-8f0e-201de683f32d-inventory\") pod \"b7687777-0417-42e1-8f0e-201de683f32d\" (UID: \"b7687777-0417-42e1-8f0e-201de683f32d\") " Nov 24 11:45:53 crc kubenswrapper[5072]: I1124 11:45:53.862369 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b7687777-0417-42e1-8f0e-201de683f32d-ceph\") pod \"b7687777-0417-42e1-8f0e-201de683f32d\" (UID: \"b7687777-0417-42e1-8f0e-201de683f32d\") " Nov 24 11:45:53 crc kubenswrapper[5072]: I1124 11:45:53.862446 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvxpn\" (UniqueName: \"kubernetes.io/projected/b7687777-0417-42e1-8f0e-201de683f32d-kube-api-access-fvxpn\") pod \"b7687777-0417-42e1-8f0e-201de683f32d\" (UID: \"b7687777-0417-42e1-8f0e-201de683f32d\") " Nov 24 11:45:53 crc kubenswrapper[5072]: I1124 11:45:53.862484 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7687777-0417-42e1-8f0e-201de683f32d-ssh-key\") pod \"b7687777-0417-42e1-8f0e-201de683f32d\" (UID: \"b7687777-0417-42e1-8f0e-201de683f32d\") " Nov 24 11:45:53 crc kubenswrapper[5072]: I1124 11:45:53.869147 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7687777-0417-42e1-8f0e-201de683f32d-kube-api-access-fvxpn" (OuterVolumeSpecName: "kube-api-access-fvxpn") pod "b7687777-0417-42e1-8f0e-201de683f32d" (UID: "b7687777-0417-42e1-8f0e-201de683f32d"). InnerVolumeSpecName "kube-api-access-fvxpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:45:53 crc kubenswrapper[5072]: I1124 11:45:53.869828 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7687777-0417-42e1-8f0e-201de683f32d-ceph" (OuterVolumeSpecName: "ceph") pod "b7687777-0417-42e1-8f0e-201de683f32d" (UID: "b7687777-0417-42e1-8f0e-201de683f32d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:45:53 crc kubenswrapper[5072]: I1124 11:45:53.912732 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7687777-0417-42e1-8f0e-201de683f32d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b7687777-0417-42e1-8f0e-201de683f32d" (UID: "b7687777-0417-42e1-8f0e-201de683f32d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:45:53 crc kubenswrapper[5072]: I1124 11:45:53.921408 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7687777-0417-42e1-8f0e-201de683f32d-inventory" (OuterVolumeSpecName: "inventory") pod "b7687777-0417-42e1-8f0e-201de683f32d" (UID: "b7687777-0417-42e1-8f0e-201de683f32d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:45:53 crc kubenswrapper[5072]: I1124 11:45:53.964534 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvxpn\" (UniqueName: \"kubernetes.io/projected/b7687777-0417-42e1-8f0e-201de683f32d-kube-api-access-fvxpn\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:53 crc kubenswrapper[5072]: I1124 11:45:53.964567 5072 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7687777-0417-42e1-8f0e-201de683f32d-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:53 crc kubenswrapper[5072]: I1124 11:45:53.964576 5072 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7687777-0417-42e1-8f0e-201de683f32d-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:53 crc kubenswrapper[5072]: I1124 11:45:53.964585 5072 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b7687777-0417-42e1-8f0e-201de683f32d-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:54 crc kubenswrapper[5072]: I1124 11:45:54.268491 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lrxgj" event={"ID":"b7687777-0417-42e1-8f0e-201de683f32d","Type":"ContainerDied","Data":"b8e48577015afba0d523745b2a4d34d7be7a8c0646a8a80cc08404b1b94dc202"} Nov 24 11:45:54 crc kubenswrapper[5072]: I1124 11:45:54.268782 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8e48577015afba0d523745b2a4d34d7be7a8c0646a8a80cc08404b1b94dc202" Nov 24 11:45:54 crc kubenswrapper[5072]: I1124 11:45:54.268544 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lrxgj" Nov 24 11:45:54 crc kubenswrapper[5072]: I1124 11:45:54.352549 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr"] Nov 24 11:45:54 crc kubenswrapper[5072]: E1124 11:45:54.353017 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7687777-0417-42e1-8f0e-201de683f32d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:45:54 crc kubenswrapper[5072]: I1124 11:45:54.353047 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7687777-0417-42e1-8f0e-201de683f32d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:45:54 crc kubenswrapper[5072]: I1124 11:45:54.353284 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7687777-0417-42e1-8f0e-201de683f32d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:45:54 crc kubenswrapper[5072]: I1124 11:45:54.353962 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr" Nov 24 11:45:54 crc kubenswrapper[5072]: I1124 11:45:54.356488 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:45:54 crc kubenswrapper[5072]: I1124 11:45:54.361459 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 11:45:54 crc kubenswrapper[5072]: I1124 11:45:54.361459 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b6s7d" Nov 24 11:45:54 crc kubenswrapper[5072]: I1124 11:45:54.362068 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:45:54 crc kubenswrapper[5072]: I1124 11:45:54.362161 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:45:54 crc kubenswrapper[5072]: I1124 11:45:54.364274 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr"] Nov 24 11:45:54 crc kubenswrapper[5072]: I1124 11:45:54.472190 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/42275dab-0c0f-488a-9d9f-00d08fd1a9fb-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr\" (UID: \"42275dab-0c0f-488a-9d9f-00d08fd1a9fb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr" Nov 24 11:45:54 crc kubenswrapper[5072]: I1124 11:45:54.472260 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42275dab-0c0f-488a-9d9f-00d08fd1a9fb-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr\" (UID: \"42275dab-0c0f-488a-9d9f-00d08fd1a9fb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr" Nov 24 11:45:54 crc kubenswrapper[5072]: I1124 11:45:54.472441 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/42275dab-0c0f-488a-9d9f-00d08fd1a9fb-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr\" (UID: \"42275dab-0c0f-488a-9d9f-00d08fd1a9fb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr" Nov 24 11:45:54 crc kubenswrapper[5072]: I1124 11:45:54.472471 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtnfd\" (UniqueName: \"kubernetes.io/projected/42275dab-0c0f-488a-9d9f-00d08fd1a9fb-kube-api-access-vtnfd\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr\" (UID: \"42275dab-0c0f-488a-9d9f-00d08fd1a9fb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr" Nov 24 11:45:54 crc kubenswrapper[5072]: I1124 11:45:54.573743 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/42275dab-0c0f-488a-9d9f-00d08fd1a9fb-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr\" (UID: \"42275dab-0c0f-488a-9d9f-00d08fd1a9fb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr" Nov 24 11:45:54 crc kubenswrapper[5072]: I1124 11:45:54.573796 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/42275dab-0c0f-488a-9d9f-00d08fd1a9fb-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr\" (UID: \"42275dab-0c0f-488a-9d9f-00d08fd1a9fb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr" Nov 24 11:45:54 crc kubenswrapper[5072]: I1124 11:45:54.573898 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/42275dab-0c0f-488a-9d9f-00d08fd1a9fb-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr\" (UID: \"42275dab-0c0f-488a-9d9f-00d08fd1a9fb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr" Nov 24 11:45:54 crc kubenswrapper[5072]: I1124 11:45:54.573923 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtnfd\" (UniqueName: \"kubernetes.io/projected/42275dab-0c0f-488a-9d9f-00d08fd1a9fb-kube-api-access-vtnfd\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr\" (UID: \"42275dab-0c0f-488a-9d9f-00d08fd1a9fb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr" Nov 24 11:45:54 crc kubenswrapper[5072]: I1124 11:45:54.578114 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42275dab-0c0f-488a-9d9f-00d08fd1a9fb-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr\" (UID: \"42275dab-0c0f-488a-9d9f-00d08fd1a9fb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr" Nov 24 11:45:54 crc kubenswrapper[5072]: I1124 11:45:54.578155 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/42275dab-0c0f-488a-9d9f-00d08fd1a9fb-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr\" (UID: \"42275dab-0c0f-488a-9d9f-00d08fd1a9fb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr" Nov 24 11:45:54 crc kubenswrapper[5072]: I1124 11:45:54.578519 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/42275dab-0c0f-488a-9d9f-00d08fd1a9fb-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr\" (UID: \"42275dab-0c0f-488a-9d9f-00d08fd1a9fb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr" Nov 24 11:45:54 crc kubenswrapper[5072]: I1124 11:45:54.589965 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtnfd\" (UniqueName: \"kubernetes.io/projected/42275dab-0c0f-488a-9d9f-00d08fd1a9fb-kube-api-access-vtnfd\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr\" (UID: \"42275dab-0c0f-488a-9d9f-00d08fd1a9fb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr" Nov 24 11:45:54 crc kubenswrapper[5072]: I1124 11:45:54.678938 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr" Nov 24 11:45:55 crc kubenswrapper[5072]: I1124 11:45:55.049155 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr"] Nov 24 11:45:55 crc kubenswrapper[5072]: I1124 11:45:55.277196 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr" event={"ID":"42275dab-0c0f-488a-9d9f-00d08fd1a9fb","Type":"ContainerStarted","Data":"4a6105c13502d8ed1006ce4da91c6128ab5ee2694b0c2289cb277faa67cf8552"} Nov 24 11:45:56 crc kubenswrapper[5072]: I1124 11:45:56.284564 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr" event={"ID":"42275dab-0c0f-488a-9d9f-00d08fd1a9fb","Type":"ContainerStarted","Data":"4ffcac3116185466ea5d56b1a3d59c0767fa787e40adde0207ed4b244dbbc4f7"} Nov 24 11:46:00 crc kubenswrapper[5072]: I1124 11:46:00.323600 5072 generic.go:334] "Generic (PLEG): container finished" podID="42275dab-0c0f-488a-9d9f-00d08fd1a9fb" containerID="4ffcac3116185466ea5d56b1a3d59c0767fa787e40adde0207ed4b244dbbc4f7" exitCode=0 Nov 24 11:46:00 crc kubenswrapper[5072]: I1124 11:46:00.323665 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr" event={"ID":"42275dab-0c0f-488a-9d9f-00d08fd1a9fb","Type":"ContainerDied","Data":"4ffcac3116185466ea5d56b1a3d59c0767fa787e40adde0207ed4b244dbbc4f7"} Nov 24 11:46:01 crc kubenswrapper[5072]: I1124 11:46:01.712533 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr" Nov 24 11:46:01 crc kubenswrapper[5072]: I1124 11:46:01.915611 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/42275dab-0c0f-488a-9d9f-00d08fd1a9fb-ceph\") pod \"42275dab-0c0f-488a-9d9f-00d08fd1a9fb\" (UID: \"42275dab-0c0f-488a-9d9f-00d08fd1a9fb\") " Nov 24 11:46:01 crc kubenswrapper[5072]: I1124 11:46:01.915854 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/42275dab-0c0f-488a-9d9f-00d08fd1a9fb-ssh-key\") pod \"42275dab-0c0f-488a-9d9f-00d08fd1a9fb\" (UID: \"42275dab-0c0f-488a-9d9f-00d08fd1a9fb\") " Nov 24 11:46:01 crc kubenswrapper[5072]: I1124 11:46:01.915976 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42275dab-0c0f-488a-9d9f-00d08fd1a9fb-inventory\") pod \"42275dab-0c0f-488a-9d9f-00d08fd1a9fb\" (UID: \"42275dab-0c0f-488a-9d9f-00d08fd1a9fb\") " Nov 24 11:46:01 crc kubenswrapper[5072]: I1124 11:46:01.916168 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtnfd\" (UniqueName: \"kubernetes.io/projected/42275dab-0c0f-488a-9d9f-00d08fd1a9fb-kube-api-access-vtnfd\") pod \"42275dab-0c0f-488a-9d9f-00d08fd1a9fb\" (UID: \"42275dab-0c0f-488a-9d9f-00d08fd1a9fb\") " Nov 24 11:46:01 crc kubenswrapper[5072]: I1124 11:46:01.924776 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42275dab-0c0f-488a-9d9f-00d08fd1a9fb-ceph" (OuterVolumeSpecName: "ceph") pod "42275dab-0c0f-488a-9d9f-00d08fd1a9fb" (UID: "42275dab-0c0f-488a-9d9f-00d08fd1a9fb"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:46:01 crc kubenswrapper[5072]: I1124 11:46:01.925708 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42275dab-0c0f-488a-9d9f-00d08fd1a9fb-kube-api-access-vtnfd" (OuterVolumeSpecName: "kube-api-access-vtnfd") pod "42275dab-0c0f-488a-9d9f-00d08fd1a9fb" (UID: "42275dab-0c0f-488a-9d9f-00d08fd1a9fb"). InnerVolumeSpecName "kube-api-access-vtnfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:46:01 crc kubenswrapper[5072]: I1124 11:46:01.964617 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42275dab-0c0f-488a-9d9f-00d08fd1a9fb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "42275dab-0c0f-488a-9d9f-00d08fd1a9fb" (UID: "42275dab-0c0f-488a-9d9f-00d08fd1a9fb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:46:01 crc kubenswrapper[5072]: I1124 11:46:01.968075 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42275dab-0c0f-488a-9d9f-00d08fd1a9fb-inventory" (OuterVolumeSpecName: "inventory") pod "42275dab-0c0f-488a-9d9f-00d08fd1a9fb" (UID: "42275dab-0c0f-488a-9d9f-00d08fd1a9fb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.022900 5072 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/42275dab-0c0f-488a-9d9f-00d08fd1a9fb-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.022957 5072 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/42275dab-0c0f-488a-9d9f-00d08fd1a9fb-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.022981 5072 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42275dab-0c0f-488a-9d9f-00d08fd1a9fb-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.023001 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtnfd\" (UniqueName: \"kubernetes.io/projected/42275dab-0c0f-488a-9d9f-00d08fd1a9fb-kube-api-access-vtnfd\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.346794 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr" event={"ID":"42275dab-0c0f-488a-9d9f-00d08fd1a9fb","Type":"ContainerDied","Data":"4a6105c13502d8ed1006ce4da91c6128ab5ee2694b0c2289cb277faa67cf8552"} Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.346875 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a6105c13502d8ed1006ce4da91c6128ab5ee2694b0c2289cb277faa67cf8552" Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.346877 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr" Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.416839 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vptlp"] Nov 24 11:46:02 crc kubenswrapper[5072]: E1124 11:46:02.417538 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42275dab-0c0f-488a-9d9f-00d08fd1a9fb" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.417560 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="42275dab-0c0f-488a-9d9f-00d08fd1a9fb" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.417797 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="42275dab-0c0f-488a-9d9f-00d08fd1a9fb" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.418532 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vptlp" Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.421495 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.422682 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.422756 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b6s7d" Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.423044 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.423602 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.433467 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/792ebb76-1e10-452d-a1e3-159bb5b80975-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vptlp\" (UID: \"792ebb76-1e10-452d-a1e3-159bb5b80975\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vptlp" Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.433540 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/792ebb76-1e10-452d-a1e3-159bb5b80975-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vptlp\" (UID: \"792ebb76-1e10-452d-a1e3-159bb5b80975\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vptlp" Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.433727 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/792ebb76-1e10-452d-a1e3-159bb5b80975-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vptlp\" (UID: \"792ebb76-1e10-452d-a1e3-159bb5b80975\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vptlp" Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.433824 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-l6wnj\" (UniqueName: \"kubernetes.io/projected/792ebb76-1e10-452d-a1e3-159bb5b80975-kube-api-access-l6wnj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vptlp\" (UID: \"792ebb76-1e10-452d-a1e3-159bb5b80975\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vptlp" Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.452158 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vptlp"] Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.534847 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/792ebb76-1e10-452d-a1e3-159bb5b80975-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vptlp\" (UID: \"792ebb76-1e10-452d-a1e3-159bb5b80975\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vptlp" Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.534901 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6wnj\" (UniqueName: \"kubernetes.io/projected/792ebb76-1e10-452d-a1e3-159bb5b80975-kube-api-access-l6wnj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vptlp\" (UID: \"792ebb76-1e10-452d-a1e3-159bb5b80975\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vptlp" Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.534973 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/792ebb76-1e10-452d-a1e3-159bb5b80975-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vptlp\" (UID: \"792ebb76-1e10-452d-a1e3-159bb5b80975\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vptlp" Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.534990 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/792ebb76-1e10-452d-a1e3-159bb5b80975-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vptlp\" (UID: \"792ebb76-1e10-452d-a1e3-159bb5b80975\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vptlp" Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.541125 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/792ebb76-1e10-452d-a1e3-159bb5b80975-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vptlp\" (UID: \"792ebb76-1e10-452d-a1e3-159bb5b80975\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vptlp" Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.541607 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/792ebb76-1e10-452d-a1e3-159bb5b80975-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vptlp\" (UID: \"792ebb76-1e10-452d-a1e3-159bb5b80975\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vptlp" Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.544522 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/792ebb76-1e10-452d-a1e3-159bb5b80975-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vptlp\" (UID: \"792ebb76-1e10-452d-a1e3-159bb5b80975\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vptlp" Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.556056 5072 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6wnj\" (UniqueName: \"kubernetes.io/projected/792ebb76-1e10-452d-a1e3-159bb5b80975-kube-api-access-l6wnj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vptlp\" (UID: \"792ebb76-1e10-452d-a1e3-159bb5b80975\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vptlp" Nov 24 11:46:02 crc kubenswrapper[5072]: I1124 11:46:02.741753 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vptlp" Nov 24 11:46:03 crc kubenswrapper[5072]: I1124 11:46:03.054891 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vptlp"] Nov 24 11:46:03 crc kubenswrapper[5072]: I1124 11:46:03.354078 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vptlp" event={"ID":"792ebb76-1e10-452d-a1e3-159bb5b80975","Type":"ContainerStarted","Data":"5f153d5d1055100a019ad24ff1760b7e4e41485c49874873b601e780074e4ce4"} Nov 24 11:46:04 crc kubenswrapper[5072]: I1124 11:46:04.366262 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vptlp" event={"ID":"792ebb76-1e10-452d-a1e3-159bb5b80975","Type":"ContainerStarted","Data":"88a78b4f7a318091a28ba5d2062d4838e8fd14e41110154680f274023f158cad"} Nov 24 11:46:04 crc kubenswrapper[5072]: I1124 11:46:04.395631 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vptlp" podStartSLOduration=1.860909072 podStartE2EDuration="2.395604115s" podCreationTimestamp="2025-11-24 11:46:02 +0000 UTC" firstStartedPulling="2025-11-24 11:46:03.068149994 +0000 UTC m=+2214.779674470" lastFinishedPulling="2025-11-24 11:46:03.602845037 +0000 UTC m=+2215.314369513" observedRunningTime="2025-11-24 11:46:04.39254298 +0000 UTC m=+2216.104067456" watchObservedRunningTime="2025-11-24 11:46:04.395604115 +0000 UTC m=+2216.107128631" Nov 24 11:46:13 crc kubenswrapper[5072]: I1124 11:46:13.644750 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:46:13 crc kubenswrapper[5072]: I1124 11:46:13.645190 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:46:24 crc kubenswrapper[5072]: I1124 11:46:24.412537 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jsphb"] Nov 24 11:46:24 crc kubenswrapper[5072]: I1124 11:46:24.414928 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jsphb" Nov 24 11:46:24 crc kubenswrapper[5072]: I1124 11:46:24.437549 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jsphb"] Nov 24 11:46:24 crc kubenswrapper[5072]: I1124 11:46:24.588633 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q9dp\" (UniqueName: \"kubernetes.io/projected/32896dd6-e92e-42bc-93fa-5ad41c44d299-kube-api-access-7q9dp\") pod \"redhat-operators-jsphb\" (UID: \"32896dd6-e92e-42bc-93fa-5ad41c44d299\") " pod="openshift-marketplace/redhat-operators-jsphb" Nov 24 11:46:24 crc kubenswrapper[5072]: I1124 11:46:24.588937 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32896dd6-e92e-42bc-93fa-5ad41c44d299-catalog-content\") pod \"redhat-operators-jsphb\" (UID: \"32896dd6-e92e-42bc-93fa-5ad41c44d299\") " pod="openshift-marketplace/redhat-operators-jsphb" Nov 24 11:46:24 crc kubenswrapper[5072]: I1124 11:46:24.589102 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32896dd6-e92e-42bc-93fa-5ad41c44d299-utilities\") pod \"redhat-operators-jsphb\" (UID: \"32896dd6-e92e-42bc-93fa-5ad41c44d299\") " pod="openshift-marketplace/redhat-operators-jsphb" Nov 24 11:46:24 crc kubenswrapper[5072]: I1124 11:46:24.690994 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32896dd6-e92e-42bc-93fa-5ad41c44d299-utilities\") pod \"redhat-operators-jsphb\" (UID: \"32896dd6-e92e-42bc-93fa-5ad41c44d299\") " pod="openshift-marketplace/redhat-operators-jsphb" Nov 24 11:46:24 crc kubenswrapper[5072]: I1124 11:46:24.691098 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q9dp\" (UniqueName: \"kubernetes.io/projected/32896dd6-e92e-42bc-93fa-5ad41c44d299-kube-api-access-7q9dp\") pod \"redhat-operators-jsphb\" (UID: \"32896dd6-e92e-42bc-93fa-5ad41c44d299\") " pod="openshift-marketplace/redhat-operators-jsphb" Nov 24 11:46:24 crc kubenswrapper[5072]: I1124 11:46:24.691147 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32896dd6-e92e-42bc-93fa-5ad41c44d299-catalog-content\") pod \"redhat-operators-jsphb\" (UID: \"32896dd6-e92e-42bc-93fa-5ad41c44d299\") " pod="openshift-marketplace/redhat-operators-jsphb" Nov 24 11:46:24 crc kubenswrapper[5072]: I1124 11:46:24.691528 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32896dd6-e92e-42bc-93fa-5ad41c44d299-utilities\") pod \"redhat-operators-jsphb\" (UID: \"32896dd6-e92e-42bc-93fa-5ad41c44d299\") " pod="openshift-marketplace/redhat-operators-jsphb" Nov 24 11:46:24 crc kubenswrapper[5072]: I1124 11:46:24.691602 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32896dd6-e92e-42bc-93fa-5ad41c44d299-catalog-content\") pod \"redhat-operators-jsphb\" (UID: \"32896dd6-e92e-42bc-93fa-5ad41c44d299\") " pod="openshift-marketplace/redhat-operators-jsphb" Nov 24 11:46:24 crc kubenswrapper[5072]: I1124 11:46:24.708845 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7q9dp\" (UniqueName: \"kubernetes.io/projected/32896dd6-e92e-42bc-93fa-5ad41c44d299-kube-api-access-7q9dp\") pod \"redhat-operators-jsphb\" (UID: \"32896dd6-e92e-42bc-93fa-5ad41c44d299\") " pod="openshift-marketplace/redhat-operators-jsphb" Nov 24 11:46:24 crc kubenswrapper[5072]: I1124 11:46:24.750626 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jsphb" Nov 24 11:46:25 crc kubenswrapper[5072]: W1124 11:46:25.219277 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32896dd6_e92e_42bc_93fa_5ad41c44d299.slice/crio-e3a2a5b3e35ebc96006e01925d9874554d284970ea40df63d25741e1a76cdd58 WatchSource:0}: Error finding container e3a2a5b3e35ebc96006e01925d9874554d284970ea40df63d25741e1a76cdd58: Status 404 returned error can't find the container with id e3a2a5b3e35ebc96006e01925d9874554d284970ea40df63d25741e1a76cdd58 Nov 24 11:46:25 crc kubenswrapper[5072]: I1124 11:46:25.226964 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jsphb"] Nov 24 11:46:25 crc kubenswrapper[5072]: I1124 11:46:25.540981 5072 generic.go:334] "Generic (PLEG): container finished" podID="32896dd6-e92e-42bc-93fa-5ad41c44d299" containerID="4e4cd9f5bfcc59915c692db4409e3aacec770386e769f6467f1fd209d2cb6747" exitCode=0 Nov 24 11:46:25 crc kubenswrapper[5072]: I1124 11:46:25.541080 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsphb" event={"ID":"32896dd6-e92e-42bc-93fa-5ad41c44d299","Type":"ContainerDied","Data":"4e4cd9f5bfcc59915c692db4409e3aacec770386e769f6467f1fd209d2cb6747"} Nov 24 11:46:25 crc kubenswrapper[5072]: I1124 11:46:25.541260 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsphb" event={"ID":"32896dd6-e92e-42bc-93fa-5ad41c44d299","Type":"ContainerStarted","Data":"e3a2a5b3e35ebc96006e01925d9874554d284970ea40df63d25741e1a76cdd58"} Nov 24 11:46:26 crc kubenswrapper[5072]: I1124 11:46:26.557393 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsphb" event={"ID":"32896dd6-e92e-42bc-93fa-5ad41c44d299","Type":"ContainerStarted","Data":"35dbb52639f4bcb87ac2daedcc7ea89cfbf31e795b91c50a99b73559859b4d22"} Nov 24 11:46:27 crc kubenswrapper[5072]: I1124 11:46:27.565534 5072 generic.go:334] "Generic (PLEG): container finished" podID="32896dd6-e92e-42bc-93fa-5ad41c44d299" containerID="35dbb52639f4bcb87ac2daedcc7ea89cfbf31e795b91c50a99b73559859b4d22" exitCode=0 Nov 24 11:46:27 crc kubenswrapper[5072]: I1124 11:46:27.565787 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsphb" event={"ID":"32896dd6-e92e-42bc-93fa-5ad41c44d299","Type":"ContainerDied","Data":"35dbb52639f4bcb87ac2daedcc7ea89cfbf31e795b91c50a99b73559859b4d22"} Nov 24 11:46:28 crc kubenswrapper[5072]: I1124 11:46:28.577560 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsphb" event={"ID":"32896dd6-e92e-42bc-93fa-5ad41c44d299","Type":"ContainerStarted","Data":"1c6051915bd6cb7312caf5a6a9ea4601d6ab412bb40620786c4c7288aee17f2a"} Nov 24 11:46:28 crc kubenswrapper[5072]: I1124 11:46:28.600961 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jsphb" podStartSLOduration=2.160030789 podStartE2EDuration="4.600939503s" 
podCreationTimestamp="2025-11-24 11:46:24 +0000 UTC" firstStartedPulling="2025-11-24 11:46:25.542411537 +0000 UTC m=+2237.253936013" lastFinishedPulling="2025-11-24 11:46:27.983320251 +0000 UTC m=+2239.694844727" observedRunningTime="2025-11-24 11:46:28.596653568 +0000 UTC m=+2240.308178044" watchObservedRunningTime="2025-11-24 11:46:28.600939503 +0000 UTC m=+2240.312463979" Nov 24 11:46:29 crc kubenswrapper[5072]: I1124 11:46:29.198309 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4nsmr"] Nov 24 11:46:29 crc kubenswrapper[5072]: I1124 11:46:29.201585 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4nsmr" Nov 24 11:46:29 crc kubenswrapper[5072]: I1124 11:46:29.212911 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4nsmr"] Nov 24 11:46:29 crc kubenswrapper[5072]: I1124 11:46:29.271317 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38853327-58cd-437a-9f17-6558118671bf-utilities\") pod \"community-operators-4nsmr\" (UID: \"38853327-58cd-437a-9f17-6558118671bf\") " pod="openshift-marketplace/community-operators-4nsmr" Nov 24 11:46:29 crc kubenswrapper[5072]: I1124 11:46:29.271446 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38853327-58cd-437a-9f17-6558118671bf-catalog-content\") pod \"community-operators-4nsmr\" (UID: \"38853327-58cd-437a-9f17-6558118671bf\") " pod="openshift-marketplace/community-operators-4nsmr" Nov 24 11:46:29 crc kubenswrapper[5072]: I1124 11:46:29.271556 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cs7s\" (UniqueName: \"kubernetes.io/projected/38853327-58cd-437a-9f17-6558118671bf-kube-api-access-7cs7s\") pod \"community-operators-4nsmr\" (UID: \"38853327-58cd-437a-9f17-6558118671bf\") " pod="openshift-marketplace/community-operators-4nsmr" Nov 24 11:46:29 crc kubenswrapper[5072]: I1124 11:46:29.373595 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cs7s\" (UniqueName: \"kubernetes.io/projected/38853327-58cd-437a-9f17-6558118671bf-kube-api-access-7cs7s\") pod \"community-operators-4nsmr\" (UID: \"38853327-58cd-437a-9f17-6558118671bf\") " pod="openshift-marketplace/community-operators-4nsmr" Nov 24 11:46:29 crc kubenswrapper[5072]: I1124 11:46:29.373750 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38853327-58cd-437a-9f17-6558118671bf-utilities\") pod \"community-operators-4nsmr\" (UID: \"38853327-58cd-437a-9f17-6558118671bf\") " pod="openshift-marketplace/community-operators-4nsmr" Nov 24 11:46:29 crc kubenswrapper[5072]: I1124 11:46:29.373774 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38853327-58cd-437a-9f17-6558118671bf-catalog-content\") pod \"community-operators-4nsmr\" (UID: \"38853327-58cd-437a-9f17-6558118671bf\") " pod="openshift-marketplace/community-operators-4nsmr" Nov 24 11:46:29 crc kubenswrapper[5072]: I1124 11:46:29.374220 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/38853327-58cd-437a-9f17-6558118671bf-catalog-content\") pod \"community-operators-4nsmr\" (UID: \"38853327-58cd-437a-9f17-6558118671bf\") " pod="openshift-marketplace/community-operators-4nsmr" Nov 24 11:46:29 crc kubenswrapper[5072]: I1124 11:46:29.374309 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38853327-58cd-437a-9f17-6558118671bf-utilities\") pod \"community-operators-4nsmr\" (UID: \"38853327-58cd-437a-9f17-6558118671bf\") " pod="openshift-marketplace/community-operators-4nsmr" Nov 24 11:46:29 crc kubenswrapper[5072]: I1124 11:46:29.394871 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cs7s\" (UniqueName: \"kubernetes.io/projected/38853327-58cd-437a-9f17-6558118671bf-kube-api-access-7cs7s\") pod \"community-operators-4nsmr\" (UID: \"38853327-58cd-437a-9f17-6558118671bf\") " pod="openshift-marketplace/community-operators-4nsmr" Nov 24 11:46:29 crc kubenswrapper[5072]: I1124 11:46:29.521286 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4nsmr" Nov 24 11:46:30 crc kubenswrapper[5072]: I1124 11:46:30.109095 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4nsmr"] Nov 24 11:46:30 crc kubenswrapper[5072]: I1124 11:46:30.603147 5072 generic.go:334] "Generic (PLEG): container finished" podID="38853327-58cd-437a-9f17-6558118671bf" containerID="e9a56360fda9ad445326b5b7b6f14f4d86916c3d8fec39f8f0274bb6a8f4dad1" exitCode=0 Nov 24 11:46:30 crc kubenswrapper[5072]: I1124 11:46:30.603242 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nsmr" event={"ID":"38853327-58cd-437a-9f17-6558118671bf","Type":"ContainerDied","Data":"e9a56360fda9ad445326b5b7b6f14f4d86916c3d8fec39f8f0274bb6a8f4dad1"} Nov 24 11:46:30 crc kubenswrapper[5072]: I1124 11:46:30.603406 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nsmr" event={"ID":"38853327-58cd-437a-9f17-6558118671bf","Type":"ContainerStarted","Data":"ada3ce1646863847db42597d561640e194995ec769e4a0dfc7e718f700ac8170"} Nov 24 11:46:34 crc kubenswrapper[5072]: I1124 11:46:34.751443 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jsphb" Nov 24 11:46:34 crc kubenswrapper[5072]: I1124 11:46:34.752011 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jsphb" Nov 24 11:46:34 crc kubenswrapper[5072]: I1124 11:46:34.821224 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jsphb" Nov 24 11:46:35 crc kubenswrapper[5072]: I1124 11:46:35.668582 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nsmr" event={"ID":"38853327-58cd-437a-9f17-6558118671bf","Type":"ContainerStarted","Data":"6d6c35c8878231ad3b4bfe32e8251af4bc883b6ef65a48e10ac57609949bc6a4"} Nov 24 11:46:35 crc kubenswrapper[5072]: I1124 11:46:35.731562 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jsphb" Nov 24 11:46:36 crc kubenswrapper[5072]: I1124 11:46:36.682605 5072 generic.go:334] "Generic (PLEG): container finished" podID="38853327-58cd-437a-9f17-6558118671bf" 
containerID="6d6c35c8878231ad3b4bfe32e8251af4bc883b6ef65a48e10ac57609949bc6a4" exitCode=0 Nov 24 11:46:36 crc kubenswrapper[5072]: I1124 11:46:36.682703 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nsmr" event={"ID":"38853327-58cd-437a-9f17-6558118671bf","Type":"ContainerDied","Data":"6d6c35c8878231ad3b4bfe32e8251af4bc883b6ef65a48e10ac57609949bc6a4"} Nov 24 11:46:37 crc kubenswrapper[5072]: I1124 11:46:37.692605 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nsmr" event={"ID":"38853327-58cd-437a-9f17-6558118671bf","Type":"ContainerStarted","Data":"e1cdade7dbe48871bd2e78173c7df34fe6fa4a624a187912bc550a5d2407ed2c"} Nov 24 11:46:37 crc kubenswrapper[5072]: I1124 11:46:37.730784 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4nsmr" podStartSLOduration=2.256784741 podStartE2EDuration="8.730764327s" podCreationTimestamp="2025-11-24 11:46:29 +0000 UTC" firstStartedPulling="2025-11-24 11:46:30.605562642 +0000 UTC m=+2242.317087158" lastFinishedPulling="2025-11-24 11:46:37.079542248 +0000 UTC m=+2248.791066744" observedRunningTime="2025-11-24 11:46:37.718536637 +0000 UTC m=+2249.430061113" watchObservedRunningTime="2025-11-24 11:46:37.730764327 +0000 UTC m=+2249.442288813" Nov 24 11:46:37 crc kubenswrapper[5072]: I1124 11:46:37.986981 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jsphb"] Nov 24 11:46:37 crc kubenswrapper[5072]: I1124 11:46:37.987597 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jsphb" podUID="32896dd6-e92e-42bc-93fa-5ad41c44d299" containerName="registry-server" containerID="cri-o://1c6051915bd6cb7312caf5a6a9ea4601d6ab412bb40620786c4c7288aee17f2a" gracePeriod=2 Nov 24 11:46:38 crc kubenswrapper[5072]: I1124 11:46:38.474258 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jsphb" Nov 24 11:46:38 crc kubenswrapper[5072]: I1124 11:46:38.669314 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32896dd6-e92e-42bc-93fa-5ad41c44d299-catalog-content\") pod \"32896dd6-e92e-42bc-93fa-5ad41c44d299\" (UID: \"32896dd6-e92e-42bc-93fa-5ad41c44d299\") " Nov 24 11:46:38 crc kubenswrapper[5072]: I1124 11:46:38.669793 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7q9dp\" (UniqueName: \"kubernetes.io/projected/32896dd6-e92e-42bc-93fa-5ad41c44d299-kube-api-access-7q9dp\") pod \"32896dd6-e92e-42bc-93fa-5ad41c44d299\" (UID: \"32896dd6-e92e-42bc-93fa-5ad41c44d299\") " Nov 24 11:46:38 crc kubenswrapper[5072]: I1124 11:46:38.669957 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32896dd6-e92e-42bc-93fa-5ad41c44d299-utilities\") pod \"32896dd6-e92e-42bc-93fa-5ad41c44d299\" (UID: \"32896dd6-e92e-42bc-93fa-5ad41c44d299\") " Nov 24 11:46:38 crc kubenswrapper[5072]: I1124 11:46:38.671618 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32896dd6-e92e-42bc-93fa-5ad41c44d299-utilities" (OuterVolumeSpecName: "utilities") pod "32896dd6-e92e-42bc-93fa-5ad41c44d299" (UID: "32896dd6-e92e-42bc-93fa-5ad41c44d299"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:46:38 crc kubenswrapper[5072]: I1124 11:46:38.676954 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32896dd6-e92e-42bc-93fa-5ad41c44d299-kube-api-access-7q9dp" (OuterVolumeSpecName: "kube-api-access-7q9dp") pod "32896dd6-e92e-42bc-93fa-5ad41c44d299" (UID: "32896dd6-e92e-42bc-93fa-5ad41c44d299"). InnerVolumeSpecName "kube-api-access-7q9dp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:46:38 crc kubenswrapper[5072]: I1124 11:46:38.719920 5072 generic.go:334] "Generic (PLEG): container finished" podID="32896dd6-e92e-42bc-93fa-5ad41c44d299" containerID="1c6051915bd6cb7312caf5a6a9ea4601d6ab412bb40620786c4c7288aee17f2a" exitCode=0 Nov 24 11:46:38 crc kubenswrapper[5072]: I1124 11:46:38.720553 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsphb" event={"ID":"32896dd6-e92e-42bc-93fa-5ad41c44d299","Type":"ContainerDied","Data":"1c6051915bd6cb7312caf5a6a9ea4601d6ab412bb40620786c4c7288aee17f2a"} Nov 24 11:46:38 crc kubenswrapper[5072]: I1124 11:46:38.720583 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jsphb" Nov 24 11:46:38 crc kubenswrapper[5072]: I1124 11:46:38.720607 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsphb" event={"ID":"32896dd6-e92e-42bc-93fa-5ad41c44d299","Type":"ContainerDied","Data":"e3a2a5b3e35ebc96006e01925d9874554d284970ea40df63d25741e1a76cdd58"} Nov 24 11:46:38 crc kubenswrapper[5072]: I1124 11:46:38.720642 5072 scope.go:117] "RemoveContainer" containerID="1c6051915bd6cb7312caf5a6a9ea4601d6ab412bb40620786c4c7288aee17f2a" Nov 24 11:46:38 crc kubenswrapper[5072]: I1124 11:46:38.746040 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32896dd6-e92e-42bc-93fa-5ad41c44d299-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32896dd6-e92e-42bc-93fa-5ad41c44d299" (UID: "32896dd6-e92e-42bc-93fa-5ad41c44d299"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:46:38 crc kubenswrapper[5072]: I1124 11:46:38.757237 5072 scope.go:117] "RemoveContainer" containerID="35dbb52639f4bcb87ac2daedcc7ea89cfbf31e795b91c50a99b73559859b4d22" Nov 24 11:46:38 crc kubenswrapper[5072]: I1124 11:46:38.773248 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32896dd6-e92e-42bc-93fa-5ad41c44d299-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:38 crc kubenswrapper[5072]: I1124 11:46:38.773299 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32896dd6-e92e-42bc-93fa-5ad41c44d299-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:38 crc kubenswrapper[5072]: I1124 11:46:38.773311 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7q9dp\" (UniqueName: \"kubernetes.io/projected/32896dd6-e92e-42bc-93fa-5ad41c44d299-kube-api-access-7q9dp\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:38 crc kubenswrapper[5072]: I1124 11:46:38.781155 5072 scope.go:117] "RemoveContainer" containerID="4e4cd9f5bfcc59915c692db4409e3aacec770386e769f6467f1fd209d2cb6747" Nov 24 11:46:38 crc kubenswrapper[5072]: I1124 11:46:38.845856 5072 scope.go:117] "RemoveContainer" containerID="1c6051915bd6cb7312caf5a6a9ea4601d6ab412bb40620786c4c7288aee17f2a" Nov 24 11:46:38 crc kubenswrapper[5072]: E1124 11:46:38.846510 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c6051915bd6cb7312caf5a6a9ea4601d6ab412bb40620786c4c7288aee17f2a\": container with ID starting with 1c6051915bd6cb7312caf5a6a9ea4601d6ab412bb40620786c4c7288aee17f2a not found: ID does not exist" containerID="1c6051915bd6cb7312caf5a6a9ea4601d6ab412bb40620786c4c7288aee17f2a" Nov 24 11:46:38 crc kubenswrapper[5072]: I1124 11:46:38.846569 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c6051915bd6cb7312caf5a6a9ea4601d6ab412bb40620786c4c7288aee17f2a"} err="failed to get container status \"1c6051915bd6cb7312caf5a6a9ea4601d6ab412bb40620786c4c7288aee17f2a\": rpc error: code = NotFound desc = could not find container \"1c6051915bd6cb7312caf5a6a9ea4601d6ab412bb40620786c4c7288aee17f2a\": container with ID starting with 1c6051915bd6cb7312caf5a6a9ea4601d6ab412bb40620786c4c7288aee17f2a not found: ID does not exist" Nov 24 11:46:38 crc kubenswrapper[5072]: I1124 11:46:38.846607 5072 scope.go:117] "RemoveContainer" containerID="35dbb52639f4bcb87ac2daedcc7ea89cfbf31e795b91c50a99b73559859b4d22" Nov 24 11:46:38 crc kubenswrapper[5072]: E1124 11:46:38.847140 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35dbb52639f4bcb87ac2daedcc7ea89cfbf31e795b91c50a99b73559859b4d22\": container with ID starting with 35dbb52639f4bcb87ac2daedcc7ea89cfbf31e795b91c50a99b73559859b4d22 not found: ID does not exist" containerID="35dbb52639f4bcb87ac2daedcc7ea89cfbf31e795b91c50a99b73559859b4d22" Nov 24 11:46:38 crc kubenswrapper[5072]: I1124 11:46:38.847206 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35dbb52639f4bcb87ac2daedcc7ea89cfbf31e795b91c50a99b73559859b4d22"} err="failed to get container status \"35dbb52639f4bcb87ac2daedcc7ea89cfbf31e795b91c50a99b73559859b4d22\": rpc error: code = NotFound desc = could not find container 
\"35dbb52639f4bcb87ac2daedcc7ea89cfbf31e795b91c50a99b73559859b4d22\": container with ID starting with 35dbb52639f4bcb87ac2daedcc7ea89cfbf31e795b91c50a99b73559859b4d22 not found: ID does not exist" Nov 24 11:46:38 crc kubenswrapper[5072]: I1124 11:46:38.847238 5072 scope.go:117] "RemoveContainer" containerID="4e4cd9f5bfcc59915c692db4409e3aacec770386e769f6467f1fd209d2cb6747" Nov 24 11:46:38 crc kubenswrapper[5072]: E1124 11:46:38.847917 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e4cd9f5bfcc59915c692db4409e3aacec770386e769f6467f1fd209d2cb6747\": container with ID starting with 4e4cd9f5bfcc59915c692db4409e3aacec770386e769f6467f1fd209d2cb6747 not found: ID does not exist" containerID="4e4cd9f5bfcc59915c692db4409e3aacec770386e769f6467f1fd209d2cb6747" Nov 24 11:46:38 crc kubenswrapper[5072]: I1124 11:46:38.847949 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e4cd9f5bfcc59915c692db4409e3aacec770386e769f6467f1fd209d2cb6747"} err="failed to get container status \"4e4cd9f5bfcc59915c692db4409e3aacec770386e769f6467f1fd209d2cb6747\": rpc error: code = NotFound desc = could not find container \"4e4cd9f5bfcc59915c692db4409e3aacec770386e769f6467f1fd209d2cb6747\": container with ID starting with 4e4cd9f5bfcc59915c692db4409e3aacec770386e769f6467f1fd209d2cb6747 not found: ID does not exist" Nov 24 11:46:39 crc kubenswrapper[5072]: I1124 11:46:39.066684 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jsphb"] Nov 24 11:46:39 crc kubenswrapper[5072]: I1124 11:46:39.073570 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jsphb"] Nov 24 11:46:39 crc kubenswrapper[5072]: I1124 11:46:39.521933 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4nsmr" Nov 24 11:46:39 crc kubenswrapper[5072]: I1124 11:46:39.521999 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4nsmr" Nov 24 11:46:39 crc kubenswrapper[5072]: I1124 11:46:39.593762 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4nsmr" Nov 24 11:46:41 crc kubenswrapper[5072]: I1124 11:46:41.035782 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32896dd6-e92e-42bc-93fa-5ad41c44d299" path="/var/lib/kubelet/pods/32896dd6-e92e-42bc-93fa-5ad41c44d299/volumes" Nov 24 11:46:43 crc kubenswrapper[5072]: I1124 11:46:43.645551 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:46:43 crc kubenswrapper[5072]: I1124 11:46:43.645885 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:46:43 crc kubenswrapper[5072]: I1124 11:46:43.645930 5072 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 
24 11:46:43 crc kubenswrapper[5072]: I1124 11:46:43.646587 5072 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493"} pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 11:46:43 crc kubenswrapper[5072]: I1124 11:46:43.646644 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" containerID="cri-o://6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493" gracePeriod=600 Nov 24 11:46:43 crc kubenswrapper[5072]: E1124 11:46:43.772882 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:46:43 crc kubenswrapper[5072]: I1124 11:46:43.773765 5072 generic.go:334] "Generic (PLEG): container finished" podID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerID="6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493" exitCode=0 Nov 24 11:46:43 crc kubenswrapper[5072]: I1124 11:46:43.773839 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerDied","Data":"6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493"} Nov 24 11:46:43 crc kubenswrapper[5072]: I1124 11:46:43.773912 5072 scope.go:117] "RemoveContainer" containerID="189ce64d61f8d24afa478e629c32eb4f3644b48f2f7f50733de592c3b81bfb86" Nov 24 11:46:44 crc kubenswrapper[5072]: I1124 11:46:44.793099 5072 scope.go:117] "RemoveContainer" containerID="6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493" Nov 24 11:46:44 crc kubenswrapper[5072]: E1124 11:46:44.793933 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:46:49 crc kubenswrapper[5072]: I1124 11:46:49.586412 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4nsmr" Nov 24 11:46:49 crc kubenswrapper[5072]: I1124 11:46:49.652604 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4nsmr"] Nov 24 11:46:49 crc kubenswrapper[5072]: I1124 11:46:49.727681 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9k5tg"] Nov 24 11:46:49 crc kubenswrapper[5072]: I1124 11:46:49.728182 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9k5tg" podUID="73b603ce-232a-4aa0-b6c7-fd3a47d3031c" 
containerName="registry-server" containerID="cri-o://ca49a9bae1e976a5495655ba26bd93da29a0bc9240d928a311ee7fd613b90d55" gracePeriod=2 Nov 24 11:46:50 crc kubenswrapper[5072]: I1124 11:46:50.199480 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9k5tg" Nov 24 11:46:50 crc kubenswrapper[5072]: I1124 11:46:50.302349 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b603ce-232a-4aa0-b6c7-fd3a47d3031c-catalog-content\") pod \"73b603ce-232a-4aa0-b6c7-fd3a47d3031c\" (UID: \"73b603ce-232a-4aa0-b6c7-fd3a47d3031c\") " Nov 24 11:46:50 crc kubenswrapper[5072]: I1124 11:46:50.302627 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b603ce-232a-4aa0-b6c7-fd3a47d3031c-utilities\") pod \"73b603ce-232a-4aa0-b6c7-fd3a47d3031c\" (UID: \"73b603ce-232a-4aa0-b6c7-fd3a47d3031c\") " Nov 24 11:46:50 crc kubenswrapper[5072]: I1124 11:46:50.302709 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bml6p\" (UniqueName: \"kubernetes.io/projected/73b603ce-232a-4aa0-b6c7-fd3a47d3031c-kube-api-access-bml6p\") pod \"73b603ce-232a-4aa0-b6c7-fd3a47d3031c\" (UID: \"73b603ce-232a-4aa0-b6c7-fd3a47d3031c\") " Nov 24 11:46:50 crc kubenswrapper[5072]: I1124 11:46:50.303120 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73b603ce-232a-4aa0-b6c7-fd3a47d3031c-utilities" (OuterVolumeSpecName: "utilities") pod "73b603ce-232a-4aa0-b6c7-fd3a47d3031c" (UID: "73b603ce-232a-4aa0-b6c7-fd3a47d3031c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:46:50 crc kubenswrapper[5072]: I1124 11:46:50.308315 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73b603ce-232a-4aa0-b6c7-fd3a47d3031c-kube-api-access-bml6p" (OuterVolumeSpecName: "kube-api-access-bml6p") pod "73b603ce-232a-4aa0-b6c7-fd3a47d3031c" (UID: "73b603ce-232a-4aa0-b6c7-fd3a47d3031c"). InnerVolumeSpecName "kube-api-access-bml6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:46:50 crc kubenswrapper[5072]: I1124 11:46:50.369308 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73b603ce-232a-4aa0-b6c7-fd3a47d3031c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73b603ce-232a-4aa0-b6c7-fd3a47d3031c" (UID: "73b603ce-232a-4aa0-b6c7-fd3a47d3031c"). InnerVolumeSpecName "catalog-content". 
Nov 24 11:46:50 crc kubenswrapper[5072]: I1124 11:46:50.369308 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73b603ce-232a-4aa0-b6c7-fd3a47d3031c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73b603ce-232a-4aa0-b6c7-fd3a47d3031c" (UID: "73b603ce-232a-4aa0-b6c7-fd3a47d3031c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 11:46:50 crc kubenswrapper[5072]: I1124 11:46:50.404326 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b603ce-232a-4aa0-b6c7-fd3a47d3031c-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 11:46:50 crc kubenswrapper[5072]: I1124 11:46:50.404909 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bml6p\" (UniqueName: \"kubernetes.io/projected/73b603ce-232a-4aa0-b6c7-fd3a47d3031c-kube-api-access-bml6p\") on node \"crc\" DevicePath \"\""
Nov 24 11:46:50 crc kubenswrapper[5072]: I1124 11:46:50.404928 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b603ce-232a-4aa0-b6c7-fd3a47d3031c-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 11:46:50 crc kubenswrapper[5072]: I1124 11:46:50.846225 5072 generic.go:334] "Generic (PLEG): container finished" podID="73b603ce-232a-4aa0-b6c7-fd3a47d3031c" containerID="ca49a9bae1e976a5495655ba26bd93da29a0bc9240d928a311ee7fd613b90d55" exitCode=0
Nov 24 11:46:50 crc kubenswrapper[5072]: I1124 11:46:50.846290 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9k5tg"
Nov 24 11:46:50 crc kubenswrapper[5072]: I1124 11:46:50.846277 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k5tg" event={"ID":"73b603ce-232a-4aa0-b6c7-fd3a47d3031c","Type":"ContainerDied","Data":"ca49a9bae1e976a5495655ba26bd93da29a0bc9240d928a311ee7fd613b90d55"}
Nov 24 11:46:50 crc kubenswrapper[5072]: I1124 11:46:50.846754 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k5tg" event={"ID":"73b603ce-232a-4aa0-b6c7-fd3a47d3031c","Type":"ContainerDied","Data":"a8621cb41477fc4222c30d915a84f480e740d6bc67bcebcf96a3b8e76b3d7ffb"}
Nov 24 11:46:50 crc kubenswrapper[5072]: I1124 11:46:50.846799 5072 scope.go:117] "RemoveContainer" containerID="ca49a9bae1e976a5495655ba26bd93da29a0bc9240d928a311ee7fd613b90d55"
Nov 24 11:46:50 crc kubenswrapper[5072]: I1124 11:46:50.883803 5072 scope.go:117] "RemoveContainer" containerID="e4c9d497cbed7bb7513114d0a51f47637c86b0474a697317e2bedf6e24582b3a"
Nov 24 11:46:50 crc kubenswrapper[5072]: I1124 11:46:50.889857 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9k5tg"]
Nov 24 11:46:50 crc kubenswrapper[5072]: I1124 11:46:50.906119 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9k5tg"]
Nov 24 11:46:50 crc kubenswrapper[5072]: I1124 11:46:50.909824 5072 scope.go:117] "RemoveContainer" containerID="ce6eec7b31dc9ee5918dc3c9b466e5a1f1d662881a13ddf235ca586d4f2a4e9f"
Nov 24 11:46:50 crc kubenswrapper[5072]: I1124 11:46:50.955636 5072 scope.go:117] "RemoveContainer" containerID="ca49a9bae1e976a5495655ba26bd93da29a0bc9240d928a311ee7fd613b90d55"
Nov 24 11:46:50 crc kubenswrapper[5072]: E1124 11:46:50.956481 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca49a9bae1e976a5495655ba26bd93da29a0bc9240d928a311ee7fd613b90d55\": container with ID starting with ca49a9bae1e976a5495655ba26bd93da29a0bc9240d928a311ee7fd613b90d55 not found: ID does not exist" containerID="ca49a9bae1e976a5495655ba26bd93da29a0bc9240d928a311ee7fd613b90d55"
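Every "ContainerStatus from runtime service failed ... NotFound" here is a benign race, not a real failure: the container was already deleted, and a second cleanup path re-queried an ID that no longer exists, so the kubelet logs the error and moves on. Removal is effectively idempotent; a sketch of the pattern, assuming a gRPC-style runtime client (the runtimeClient interface is a stand-in, not the actual CRI API):

```go
package sketch

import (
	"context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// runtimeClient is a hypothetical stand-in for a CRI-style gRPC client.
type runtimeClient interface {
	RemoveContainer(ctx context.Context, containerID string) error
}

// removeIfPresent treats NotFound as success: an already-removed container
// is exactly the state the kubelet wants, so the error above is logged and
// then ignored rather than retried.
func removeIfPresent(ctx context.Context, c runtimeClient, containerID string) error {
	err := c.RemoveContainer(ctx, containerID)
	if status.Code(err) == codes.NotFound {
		return nil
	}
	return err
}
```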
Nov 24 11:46:50 crc kubenswrapper[5072]: I1124 11:46:50.956536 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca49a9bae1e976a5495655ba26bd93da29a0bc9240d928a311ee7fd613b90d55"} err="failed to get container status \"ca49a9bae1e976a5495655ba26bd93da29a0bc9240d928a311ee7fd613b90d55\": rpc error: code = NotFound desc = could not find container \"ca49a9bae1e976a5495655ba26bd93da29a0bc9240d928a311ee7fd613b90d55\": container with ID starting with ca49a9bae1e976a5495655ba26bd93da29a0bc9240d928a311ee7fd613b90d55 not found: ID does not exist"
Nov 24 11:46:50 crc kubenswrapper[5072]: I1124 11:46:50.956577 5072 scope.go:117] "RemoveContainer" containerID="e4c9d497cbed7bb7513114d0a51f47637c86b0474a697317e2bedf6e24582b3a"
Nov 24 11:46:50 crc kubenswrapper[5072]: E1124 11:46:50.957103 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4c9d497cbed7bb7513114d0a51f47637c86b0474a697317e2bedf6e24582b3a\": container with ID starting with e4c9d497cbed7bb7513114d0a51f47637c86b0474a697317e2bedf6e24582b3a not found: ID does not exist" containerID="e4c9d497cbed7bb7513114d0a51f47637c86b0474a697317e2bedf6e24582b3a"
Nov 24 11:46:50 crc kubenswrapper[5072]: I1124 11:46:50.957138 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4c9d497cbed7bb7513114d0a51f47637c86b0474a697317e2bedf6e24582b3a"} err="failed to get container status \"e4c9d497cbed7bb7513114d0a51f47637c86b0474a697317e2bedf6e24582b3a\": rpc error: code = NotFound desc = could not find container \"e4c9d497cbed7bb7513114d0a51f47637c86b0474a697317e2bedf6e24582b3a\": container with ID starting with e4c9d497cbed7bb7513114d0a51f47637c86b0474a697317e2bedf6e24582b3a not found: ID does not exist"
Nov 24 11:46:50 crc kubenswrapper[5072]: I1124 11:46:50.957167 5072 scope.go:117] "RemoveContainer" containerID="ce6eec7b31dc9ee5918dc3c9b466e5a1f1d662881a13ddf235ca586d4f2a4e9f"
Nov 24 11:46:50 crc kubenswrapper[5072]: E1124 11:46:50.957465 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce6eec7b31dc9ee5918dc3c9b466e5a1f1d662881a13ddf235ca586d4f2a4e9f\": container with ID starting with ce6eec7b31dc9ee5918dc3c9b466e5a1f1d662881a13ddf235ca586d4f2a4e9f not found: ID does not exist" containerID="ce6eec7b31dc9ee5918dc3c9b466e5a1f1d662881a13ddf235ca586d4f2a4e9f"
Nov 24 11:46:50 crc kubenswrapper[5072]: I1124 11:46:50.957498 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce6eec7b31dc9ee5918dc3c9b466e5a1f1d662881a13ddf235ca586d4f2a4e9f"} err="failed to get container status \"ce6eec7b31dc9ee5918dc3c9b466e5a1f1d662881a13ddf235ca586d4f2a4e9f\": rpc error: code = NotFound desc = could not find container \"ce6eec7b31dc9ee5918dc3c9b466e5a1f1d662881a13ddf235ca586d4f2a4e9f\": container with ID starting with ce6eec7b31dc9ee5918dc3c9b466e5a1f1d662881a13ddf235ca586d4f2a4e9f not found: ID does not exist"
Nov 24 11:46:51 crc kubenswrapper[5072]: I1124 11:46:51.028134 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73b603ce-232a-4aa0-b6c7-fd3a47d3031c" path="/var/lib/kubelet/pods/73b603ce-232a-4aa0-b6c7-fd3a47d3031c/volumes"
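"Cleaned up orphaned pod volumes dir" is the terminal step of teardown: once every volume reports detached, the pod's directory under /var/lib/kubelet/pods can finally be removed. A rough sketch of that sweep, with the paths and active-pod bookkeeping simplified to illustrate the idea:

```go
package sketch

import (
	"os"
	"path/filepath"
)

// cleanOrphanedPodDirs removes the volumes directory of any pod UID that is
// no longer active, mirroring kubelet_volumes.go's "Cleaned up orphaned pod
// volumes dir". The active set and paths are simplified assumptions.
func cleanOrphanedPodDirs(podsRoot string, active map[string]bool) error {
	entries, err := os.ReadDir(podsRoot)
	if err != nil {
		return err
	}
	for _, e := range entries {
		if !e.IsDir() || active[e.Name()] {
			continue
		}
		// Only safe because the "Volume detached" entries above confirm all
		// mounts for this UID are already torn down.
		if err := os.RemoveAll(filepath.Join(podsRoot, e.Name(), "volumes")); err != nil {
			return err
		}
	}
	return nil
}
```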
Nov 24 11:46:51 crc kubenswrapper[5072]: I1124 11:46:51.878307 5072 generic.go:334] "Generic (PLEG): container finished" podID="792ebb76-1e10-452d-a1e3-159bb5b80975" containerID="88a78b4f7a318091a28ba5d2062d4838e8fd14e41110154680f274023f158cad" exitCode=0
Nov 24 11:46:51 crc kubenswrapper[5072]: I1124 11:46:51.879009 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vptlp" event={"ID":"792ebb76-1e10-452d-a1e3-159bb5b80975","Type":"ContainerDied","Data":"88a78b4f7a318091a28ba5d2062d4838e8fd14e41110154680f274023f158cad"}
Nov 24 11:46:53 crc kubenswrapper[5072]: I1124 11:46:53.294804 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vptlp"
Nov 24 11:46:53 crc kubenswrapper[5072]: I1124 11:46:53.461066 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/792ebb76-1e10-452d-a1e3-159bb5b80975-ceph\") pod \"792ebb76-1e10-452d-a1e3-159bb5b80975\" (UID: \"792ebb76-1e10-452d-a1e3-159bb5b80975\") "
Nov 24 11:46:53 crc kubenswrapper[5072]: I1124 11:46:53.461241 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/792ebb76-1e10-452d-a1e3-159bb5b80975-ssh-key\") pod \"792ebb76-1e10-452d-a1e3-159bb5b80975\" (UID: \"792ebb76-1e10-452d-a1e3-159bb5b80975\") "
Nov 24 11:46:53 crc kubenswrapper[5072]: I1124 11:46:53.461395 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/792ebb76-1e10-452d-a1e3-159bb5b80975-inventory\") pod \"792ebb76-1e10-452d-a1e3-159bb5b80975\" (UID: \"792ebb76-1e10-452d-a1e3-159bb5b80975\") "
Nov 24 11:46:53 crc kubenswrapper[5072]: I1124 11:46:53.461632 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6wnj\" (UniqueName: \"kubernetes.io/projected/792ebb76-1e10-452d-a1e3-159bb5b80975-kube-api-access-l6wnj\") pod \"792ebb76-1e10-452d-a1e3-159bb5b80975\" (UID: \"792ebb76-1e10-452d-a1e3-159bb5b80975\") "
Nov 24 11:46:53 crc kubenswrapper[5072]: I1124 11:46:53.468846 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/792ebb76-1e10-452d-a1e3-159bb5b80975-ceph" (OuterVolumeSpecName: "ceph") pod "792ebb76-1e10-452d-a1e3-159bb5b80975" (UID: "792ebb76-1e10-452d-a1e3-159bb5b80975"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:46:53 crc kubenswrapper[5072]: I1124 11:46:53.473845 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/792ebb76-1e10-452d-a1e3-159bb5b80975-kube-api-access-l6wnj" (OuterVolumeSpecName: "kube-api-access-l6wnj") pod "792ebb76-1e10-452d-a1e3-159bb5b80975" (UID: "792ebb76-1e10-452d-a1e3-159bb5b80975"). InnerVolumeSpecName "kube-api-access-l6wnj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:46:53 crc kubenswrapper[5072]: I1124 11:46:53.488307 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/792ebb76-1e10-452d-a1e3-159bb5b80975-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "792ebb76-1e10-452d-a1e3-159bb5b80975" (UID: "792ebb76-1e10-452d-a1e3-159bb5b80975"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:46:53 crc kubenswrapper[5072]: I1124 11:46:53.491502 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/792ebb76-1e10-452d-a1e3-159bb5b80975-inventory" (OuterVolumeSpecName: "inventory") pod "792ebb76-1e10-452d-a1e3-159bb5b80975" (UID: "792ebb76-1e10-452d-a1e3-159bb5b80975"). InnerVolumeSpecName "inventory".
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:46:53 crc kubenswrapper[5072]: I1124 11:46:53.564076 5072 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/792ebb76-1e10-452d-a1e3-159bb5b80975-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:53 crc kubenswrapper[5072]: I1124 11:46:53.564131 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6wnj\" (UniqueName: \"kubernetes.io/projected/792ebb76-1e10-452d-a1e3-159bb5b80975-kube-api-access-l6wnj\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:53 crc kubenswrapper[5072]: I1124 11:46:53.564151 5072 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/792ebb76-1e10-452d-a1e3-159bb5b80975-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:53 crc kubenswrapper[5072]: I1124 11:46:53.564167 5072 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/792ebb76-1e10-452d-a1e3-159bb5b80975-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:53 crc kubenswrapper[5072]: I1124 11:46:53.913934 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vptlp" event={"ID":"792ebb76-1e10-452d-a1e3-159bb5b80975","Type":"ContainerDied","Data":"5f153d5d1055100a019ad24ff1760b7e4e41485c49874873b601e780074e4ce4"} Nov 24 11:46:53 crc kubenswrapper[5072]: I1124 11:46:53.914013 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f153d5d1055100a019ad24ff1760b7e4e41485c49874873b601e780074e4ce4" Nov 24 11:46:53 crc kubenswrapper[5072]: I1124 11:46:53.914029 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vptlp" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.026533 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-p68cc"] Nov 24 11:46:54 crc kubenswrapper[5072]: E1124 11:46:54.027060 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b603ce-232a-4aa0-b6c7-fd3a47d3031c" containerName="registry-server" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.027091 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b603ce-232a-4aa0-b6c7-fd3a47d3031c" containerName="registry-server" Nov 24 11:46:54 crc kubenswrapper[5072]: E1124 11:46:54.027120 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32896dd6-e92e-42bc-93fa-5ad41c44d299" containerName="extract-utilities" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.027133 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="32896dd6-e92e-42bc-93fa-5ad41c44d299" containerName="extract-utilities" Nov 24 11:46:54 crc kubenswrapper[5072]: E1124 11:46:54.027159 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b603ce-232a-4aa0-b6c7-fd3a47d3031c" containerName="extract-content" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.027169 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b603ce-232a-4aa0-b6c7-fd3a47d3031c" containerName="extract-content" Nov 24 11:46:54 crc kubenswrapper[5072]: E1124 11:46:54.027204 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32896dd6-e92e-42bc-93fa-5ad41c44d299" containerName="extract-content" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.027216 5072 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="32896dd6-e92e-42bc-93fa-5ad41c44d299" containerName="extract-content" Nov 24 11:46:54 crc kubenswrapper[5072]: E1124 11:46:54.027240 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792ebb76-1e10-452d-a1e3-159bb5b80975" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.027253 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="792ebb76-1e10-452d-a1e3-159bb5b80975" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:46:54 crc kubenswrapper[5072]: E1124 11:46:54.027274 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32896dd6-e92e-42bc-93fa-5ad41c44d299" containerName="registry-server" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.027285 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="32896dd6-e92e-42bc-93fa-5ad41c44d299" containerName="registry-server" Nov 24 11:46:54 crc kubenswrapper[5072]: E1124 11:46:54.027299 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b603ce-232a-4aa0-b6c7-fd3a47d3031c" containerName="extract-utilities" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.027311 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b603ce-232a-4aa0-b6c7-fd3a47d3031c" containerName="extract-utilities" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.027649 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="73b603ce-232a-4aa0-b6c7-fd3a47d3031c" containerName="registry-server" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.027688 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="792ebb76-1e10-452d-a1e3-159bb5b80975" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.027732 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="32896dd6-e92e-42bc-93fa-5ad41c44d299" containerName="registry-server" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.028583 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-p68cc" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.033410 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b6s7d" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.033600 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.033811 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.033997 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.034925 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.040298 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-p68cc"] Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.179790 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c8ddc412-753d-44ff-9ac9-39a003a786dd-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-p68cc\" (UID: \"c8ddc412-753d-44ff-9ac9-39a003a786dd\") " pod="openstack/ssh-known-hosts-edpm-deployment-p68cc" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.179944 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c8ddc412-753d-44ff-9ac9-39a003a786dd-ceph\") pod \"ssh-known-hosts-edpm-deployment-p68cc\" (UID: \"c8ddc412-753d-44ff-9ac9-39a003a786dd\") " pod="openstack/ssh-known-hosts-edpm-deployment-p68cc" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.180089 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c8ddc412-753d-44ff-9ac9-39a003a786dd-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-p68cc\" (UID: \"c8ddc412-753d-44ff-9ac9-39a003a786dd\") " pod="openstack/ssh-known-hosts-edpm-deployment-p68cc" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.180730 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42hfv\" (UniqueName: \"kubernetes.io/projected/c8ddc412-753d-44ff-9ac9-39a003a786dd-kube-api-access-42hfv\") pod \"ssh-known-hosts-edpm-deployment-p68cc\" (UID: \"c8ddc412-753d-44ff-9ac9-39a003a786dd\") " pod="openstack/ssh-known-hosts-edpm-deployment-p68cc" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.281642 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c8ddc412-753d-44ff-9ac9-39a003a786dd-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-p68cc\" (UID: \"c8ddc412-753d-44ff-9ac9-39a003a786dd\") " pod="openstack/ssh-known-hosts-edpm-deployment-p68cc" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.281733 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c8ddc412-753d-44ff-9ac9-39a003a786dd-ceph\") pod \"ssh-known-hosts-edpm-deployment-p68cc\" (UID: 
\"c8ddc412-753d-44ff-9ac9-39a003a786dd\") " pod="openstack/ssh-known-hosts-edpm-deployment-p68cc" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.281754 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c8ddc412-753d-44ff-9ac9-39a003a786dd-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-p68cc\" (UID: \"c8ddc412-753d-44ff-9ac9-39a003a786dd\") " pod="openstack/ssh-known-hosts-edpm-deployment-p68cc" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.281790 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42hfv\" (UniqueName: \"kubernetes.io/projected/c8ddc412-753d-44ff-9ac9-39a003a786dd-kube-api-access-42hfv\") pod \"ssh-known-hosts-edpm-deployment-p68cc\" (UID: \"c8ddc412-753d-44ff-9ac9-39a003a786dd\") " pod="openstack/ssh-known-hosts-edpm-deployment-p68cc" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.286144 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c8ddc412-753d-44ff-9ac9-39a003a786dd-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-p68cc\" (UID: \"c8ddc412-753d-44ff-9ac9-39a003a786dd\") " pod="openstack/ssh-known-hosts-edpm-deployment-p68cc" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.286268 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c8ddc412-753d-44ff-9ac9-39a003a786dd-ceph\") pod \"ssh-known-hosts-edpm-deployment-p68cc\" (UID: \"c8ddc412-753d-44ff-9ac9-39a003a786dd\") " pod="openstack/ssh-known-hosts-edpm-deployment-p68cc" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.293651 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c8ddc412-753d-44ff-9ac9-39a003a786dd-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-p68cc\" (UID: \"c8ddc412-753d-44ff-9ac9-39a003a786dd\") " pod="openstack/ssh-known-hosts-edpm-deployment-p68cc" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.298667 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42hfv\" (UniqueName: \"kubernetes.io/projected/c8ddc412-753d-44ff-9ac9-39a003a786dd-kube-api-access-42hfv\") pod \"ssh-known-hosts-edpm-deployment-p68cc\" (UID: \"c8ddc412-753d-44ff-9ac9-39a003a786dd\") " pod="openstack/ssh-known-hosts-edpm-deployment-p68cc" Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.359508 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-p68cc"
Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.910106 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-p68cc"]
Nov 24 11:46:54 crc kubenswrapper[5072]: I1124 11:46:54.930562 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-p68cc" event={"ID":"c8ddc412-753d-44ff-9ac9-39a003a786dd","Type":"ContainerStarted","Data":"d0a08ddbf6b70ea6e074b4e05958d23eeb031910c8c347bbd79a96984b6a777b"}
Nov 24 11:46:55 crc kubenswrapper[5072]: I1124 11:46:55.939273 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-p68cc" event={"ID":"c8ddc412-753d-44ff-9ac9-39a003a786dd","Type":"ContainerStarted","Data":"9a643aaaa949ca3d0ec48ef716b084ecf0238476db2d6ffeab19291de781bdd2"}
Nov 24 11:46:55 crc kubenswrapper[5072]: I1124 11:46:55.957937 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-p68cc" podStartSLOduration=1.35339173 podStartE2EDuration="1.957920702s" podCreationTimestamp="2025-11-24 11:46:54 +0000 UTC" firstStartedPulling="2025-11-24 11:46:54.920029514 +0000 UTC m=+2266.631553990" lastFinishedPulling="2025-11-24 11:46:55.524558466 +0000 UTC m=+2267.236082962" observedRunningTime="2025-11-24 11:46:55.953160075 +0000 UTC m=+2267.664684551" watchObservedRunningTime="2025-11-24 11:46:55.957920702 +0000 UTC m=+2267.669445178"
Nov 24 11:46:59 crc kubenswrapper[5072]: I1124 11:46:59.030732 5072 scope.go:117] "RemoveContainer" containerID="6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493"
Nov 24 11:46:59 crc kubenswrapper[5072]: E1124 11:46:59.031706 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5"
Nov 24 11:47:12 crc kubenswrapper[5072]: I1124 11:47:12.095959 5072 generic.go:334] "Generic (PLEG): container finished" podID="c8ddc412-753d-44ff-9ac9-39a003a786dd" containerID="9a643aaaa949ca3d0ec48ef716b084ecf0238476db2d6ffeab19291de781bdd2" exitCode=0
Nov 24 11:47:12 crc kubenswrapper[5072]: I1124 11:47:12.096011 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-p68cc" event={"ID":"c8ddc412-753d-44ff-9ac9-39a003a786dd","Type":"ContainerDied","Data":"9a643aaaa949ca3d0ec48ef716b084ecf0238476db2d6ffeab19291de781bdd2"}
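The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (11:46:55.957920702 - 11:46:54 = 1.957920702s), and podStartSLOduration subtracts the image-pull window (lastFinishedPulling - firstStartedPulling ≈ 0.604529s), leaving ≈ 1.353392s against the logged 1.35339173; the small gap is timestamp rounding. The same arithmetic in Go, using the timestamps exactly as logged:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matches Go's default time.Time formatting used in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2025-11-24 11:46:54 +0000 UTC")
	firstPull := parse("2025-11-24 11:46:54.920029514 +0000 UTC")
	lastPull := parse("2025-11-24 11:46:55.524558466 +0000 UTC")
	running := parse("2025-11-24 11:46:55.957920702 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration (pull window excluded)
	fmt.Println(e2e, slo)                // 1.957920702s 1.35339175s
}
```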
Nov 24 11:47:13 crc kubenswrapper[5072]: I1124 11:47:13.525724 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-p68cc"
Nov 24 11:47:13 crc kubenswrapper[5072]: I1124 11:47:13.601721 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42hfv\" (UniqueName: \"kubernetes.io/projected/c8ddc412-753d-44ff-9ac9-39a003a786dd-kube-api-access-42hfv\") pod \"c8ddc412-753d-44ff-9ac9-39a003a786dd\" (UID: \"c8ddc412-753d-44ff-9ac9-39a003a786dd\") "
Nov 24 11:47:13 crc kubenswrapper[5072]: I1124 11:47:13.601850 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c8ddc412-753d-44ff-9ac9-39a003a786dd-ssh-key-openstack-edpm-ipam\") pod \"c8ddc412-753d-44ff-9ac9-39a003a786dd\" (UID: \"c8ddc412-753d-44ff-9ac9-39a003a786dd\") "
Nov 24 11:47:13 crc kubenswrapper[5072]: I1124 11:47:13.601924 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c8ddc412-753d-44ff-9ac9-39a003a786dd-inventory-0\") pod \"c8ddc412-753d-44ff-9ac9-39a003a786dd\" (UID: \"c8ddc412-753d-44ff-9ac9-39a003a786dd\") "
Nov 24 11:47:13 crc kubenswrapper[5072]: I1124 11:47:13.602071 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c8ddc412-753d-44ff-9ac9-39a003a786dd-ceph\") pod \"c8ddc412-753d-44ff-9ac9-39a003a786dd\" (UID: \"c8ddc412-753d-44ff-9ac9-39a003a786dd\") "
Nov 24 11:47:13 crc kubenswrapper[5072]: I1124 11:47:13.607489 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8ddc412-753d-44ff-9ac9-39a003a786dd-ceph" (OuterVolumeSpecName: "ceph") pod "c8ddc412-753d-44ff-9ac9-39a003a786dd" (UID: "c8ddc412-753d-44ff-9ac9-39a003a786dd"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:47:13 crc kubenswrapper[5072]: I1124 11:47:13.607633 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8ddc412-753d-44ff-9ac9-39a003a786dd-kube-api-access-42hfv" (OuterVolumeSpecName: "kube-api-access-42hfv") pod "c8ddc412-753d-44ff-9ac9-39a003a786dd" (UID: "c8ddc412-753d-44ff-9ac9-39a003a786dd"). InnerVolumeSpecName "kube-api-access-42hfv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:47:13 crc kubenswrapper[5072]: I1124 11:47:13.627047 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8ddc412-753d-44ff-9ac9-39a003a786dd-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "c8ddc412-753d-44ff-9ac9-39a003a786dd" (UID: "c8ddc412-753d-44ff-9ac9-39a003a786dd"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:47:13 crc kubenswrapper[5072]: I1124 11:47:13.641396 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8ddc412-753d-44ff-9ac9-39a003a786dd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c8ddc412-753d-44ff-9ac9-39a003a786dd" (UID: "c8ddc412-753d-44ff-9ac9-39a003a786dd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam".
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:13 crc kubenswrapper[5072]: I1124 11:47:13.704566 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42hfv\" (UniqueName: \"kubernetes.io/projected/c8ddc412-753d-44ff-9ac9-39a003a786dd-kube-api-access-42hfv\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:13 crc kubenswrapper[5072]: I1124 11:47:13.704596 5072 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c8ddc412-753d-44ff-9ac9-39a003a786dd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:13 crc kubenswrapper[5072]: I1124 11:47:13.704605 5072 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c8ddc412-753d-44ff-9ac9-39a003a786dd-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:13 crc kubenswrapper[5072]: I1124 11:47:13.704614 5072 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c8ddc412-753d-44ff-9ac9-39a003a786dd-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:14 crc kubenswrapper[5072]: I1124 11:47:14.017490 5072 scope.go:117] "RemoveContainer" containerID="6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493" Nov 24 11:47:14 crc kubenswrapper[5072]: E1124 11:47:14.018454 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:47:14 crc kubenswrapper[5072]: I1124 11:47:14.124533 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-p68cc" event={"ID":"c8ddc412-753d-44ff-9ac9-39a003a786dd","Type":"ContainerDied","Data":"d0a08ddbf6b70ea6e074b4e05958d23eeb031910c8c347bbd79a96984b6a777b"} Nov 24 11:47:14 crc kubenswrapper[5072]: I1124 11:47:14.124601 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0a08ddbf6b70ea6e074b4e05958d23eeb031910c8c347bbd79a96984b6a777b" Nov 24 11:47:14 crc kubenswrapper[5072]: I1124 11:47:14.124704 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-p68cc" Nov 24 11:47:14 crc kubenswrapper[5072]: I1124 11:47:14.211498 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9klcc"] Nov 24 11:47:14 crc kubenswrapper[5072]: E1124 11:47:14.212136 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ddc412-753d-44ff-9ac9-39a003a786dd" containerName="ssh-known-hosts-edpm-deployment" Nov 24 11:47:14 crc kubenswrapper[5072]: I1124 11:47:14.212157 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ddc412-753d-44ff-9ac9-39a003a786dd" containerName="ssh-known-hosts-edpm-deployment" Nov 24 11:47:14 crc kubenswrapper[5072]: I1124 11:47:14.214014 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8ddc412-753d-44ff-9ac9-39a003a786dd" containerName="ssh-known-hosts-edpm-deployment" Nov 24 11:47:14 crc kubenswrapper[5072]: I1124 11:47:14.214773 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9klcc" Nov 24 11:47:14 crc kubenswrapper[5072]: I1124 11:47:14.217503 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:47:14 crc kubenswrapper[5072]: I1124 11:47:14.217909 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:47:14 crc kubenswrapper[5072]: I1124 11:47:14.218135 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b6s7d" Nov 24 11:47:14 crc kubenswrapper[5072]: I1124 11:47:14.218551 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:47:14 crc kubenswrapper[5072]: I1124 11:47:14.218819 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 11:47:14 crc kubenswrapper[5072]: I1124 11:47:14.221237 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9klcc"] Nov 24 11:47:14 crc kubenswrapper[5072]: I1124 11:47:14.314667 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d97f4dff-1854-4cf0-9546-1626e9a5856b-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9klcc\" (UID: \"d97f4dff-1854-4cf0-9546-1626e9a5856b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9klcc" Nov 24 11:47:14 crc kubenswrapper[5072]: I1124 11:47:14.314731 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d97f4dff-1854-4cf0-9546-1626e9a5856b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9klcc\" (UID: \"d97f4dff-1854-4cf0-9546-1626e9a5856b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9klcc" Nov 24 11:47:14 crc kubenswrapper[5072]: I1124 11:47:14.314871 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d97f4dff-1854-4cf0-9546-1626e9a5856b-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9klcc\" (UID: \"d97f4dff-1854-4cf0-9546-1626e9a5856b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9klcc" Nov 24 11:47:14 crc kubenswrapper[5072]: I1124 11:47:14.315119 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcks6\" (UniqueName: \"kubernetes.io/projected/d97f4dff-1854-4cf0-9546-1626e9a5856b-kube-api-access-fcks6\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9klcc\" (UID: \"d97f4dff-1854-4cf0-9546-1626e9a5856b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9klcc" Nov 24 11:47:14 crc kubenswrapper[5072]: I1124 11:47:14.417427 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d97f4dff-1854-4cf0-9546-1626e9a5856b-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9klcc\" (UID: \"d97f4dff-1854-4cf0-9546-1626e9a5856b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9klcc" Nov 24 11:47:14 crc kubenswrapper[5072]: I1124 11:47:14.417479 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/d97f4dff-1854-4cf0-9546-1626e9a5856b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9klcc\" (UID: \"d97f4dff-1854-4cf0-9546-1626e9a5856b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9klcc" Nov 24 11:47:14 crc kubenswrapper[5072]: I1124 11:47:14.417505 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d97f4dff-1854-4cf0-9546-1626e9a5856b-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9klcc\" (UID: \"d97f4dff-1854-4cf0-9546-1626e9a5856b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9klcc" Nov 24 11:47:14 crc kubenswrapper[5072]: I1124 11:47:14.417564 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcks6\" (UniqueName: \"kubernetes.io/projected/d97f4dff-1854-4cf0-9546-1626e9a5856b-kube-api-access-fcks6\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9klcc\" (UID: \"d97f4dff-1854-4cf0-9546-1626e9a5856b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9klcc" Nov 24 11:47:14 crc kubenswrapper[5072]: I1124 11:47:14.421070 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d97f4dff-1854-4cf0-9546-1626e9a5856b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9klcc\" (UID: \"d97f4dff-1854-4cf0-9546-1626e9a5856b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9klcc" Nov 24 11:47:14 crc kubenswrapper[5072]: I1124 11:47:14.421753 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d97f4dff-1854-4cf0-9546-1626e9a5856b-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9klcc\" (UID: \"d97f4dff-1854-4cf0-9546-1626e9a5856b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9klcc" Nov 24 11:47:14 crc kubenswrapper[5072]: I1124 11:47:14.422929 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d97f4dff-1854-4cf0-9546-1626e9a5856b-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9klcc\" (UID: \"d97f4dff-1854-4cf0-9546-1626e9a5856b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9klcc" Nov 24 11:47:14 crc kubenswrapper[5072]: I1124 11:47:14.445579 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcks6\" (UniqueName: \"kubernetes.io/projected/d97f4dff-1854-4cf0-9546-1626e9a5856b-kube-api-access-fcks6\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9klcc\" (UID: \"d97f4dff-1854-4cf0-9546-1626e9a5856b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9klcc" Nov 24 11:47:14 crc kubenswrapper[5072]: I1124 11:47:14.544936 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9klcc" Nov 24 11:47:15 crc kubenswrapper[5072]: I1124 11:47:15.130008 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9klcc"] Nov 24 11:47:16 crc kubenswrapper[5072]: I1124 11:47:16.144207 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9klcc" event={"ID":"d97f4dff-1854-4cf0-9546-1626e9a5856b","Type":"ContainerStarted","Data":"346382f4db55d91dcda52b1edb86bc32b4f848353423d6df6c16d95529b6b62a"} Nov 24 11:47:16 crc kubenswrapper[5072]: I1124 11:47:16.144578 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9klcc" event={"ID":"d97f4dff-1854-4cf0-9546-1626e9a5856b","Type":"ContainerStarted","Data":"b0f5368f7e96ec2e9763c94e52778bb7dea2a67c28b880af37fbda7f0e40b228"} Nov 24 11:47:24 crc kubenswrapper[5072]: I1124 11:47:24.222936 5072 generic.go:334] "Generic (PLEG): container finished" podID="d97f4dff-1854-4cf0-9546-1626e9a5856b" containerID="346382f4db55d91dcda52b1edb86bc32b4f848353423d6df6c16d95529b6b62a" exitCode=0 Nov 24 11:47:24 crc kubenswrapper[5072]: I1124 11:47:24.222987 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9klcc" event={"ID":"d97f4dff-1854-4cf0-9546-1626e9a5856b","Type":"ContainerDied","Data":"346382f4db55d91dcda52b1edb86bc32b4f848353423d6df6c16d95529b6b62a"} Nov 24 11:47:25 crc kubenswrapper[5072]: I1124 11:47:25.639198 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9klcc" Nov 24 11:47:25 crc kubenswrapper[5072]: I1124 11:47:25.800126 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d97f4dff-1854-4cf0-9546-1626e9a5856b-ceph\") pod \"d97f4dff-1854-4cf0-9546-1626e9a5856b\" (UID: \"d97f4dff-1854-4cf0-9546-1626e9a5856b\") " Nov 24 11:47:25 crc kubenswrapper[5072]: I1124 11:47:25.800191 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d97f4dff-1854-4cf0-9546-1626e9a5856b-ssh-key\") pod \"d97f4dff-1854-4cf0-9546-1626e9a5856b\" (UID: \"d97f4dff-1854-4cf0-9546-1626e9a5856b\") " Nov 24 11:47:25 crc kubenswrapper[5072]: I1124 11:47:25.800403 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcks6\" (UniqueName: \"kubernetes.io/projected/d97f4dff-1854-4cf0-9546-1626e9a5856b-kube-api-access-fcks6\") pod \"d97f4dff-1854-4cf0-9546-1626e9a5856b\" (UID: \"d97f4dff-1854-4cf0-9546-1626e9a5856b\") " Nov 24 11:47:25 crc kubenswrapper[5072]: I1124 11:47:25.800441 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d97f4dff-1854-4cf0-9546-1626e9a5856b-inventory\") pod \"d97f4dff-1854-4cf0-9546-1626e9a5856b\" (UID: \"d97f4dff-1854-4cf0-9546-1626e9a5856b\") " Nov 24 11:47:25 crc kubenswrapper[5072]: I1124 11:47:25.805655 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d97f4dff-1854-4cf0-9546-1626e9a5856b-kube-api-access-fcks6" (OuterVolumeSpecName: "kube-api-access-fcks6") pod "d97f4dff-1854-4cf0-9546-1626e9a5856b" (UID: "d97f4dff-1854-4cf0-9546-1626e9a5856b"). InnerVolumeSpecName "kube-api-access-fcks6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:47:25 crc kubenswrapper[5072]: I1124 11:47:25.805767 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d97f4dff-1854-4cf0-9546-1626e9a5856b-ceph" (OuterVolumeSpecName: "ceph") pod "d97f4dff-1854-4cf0-9546-1626e9a5856b" (UID: "d97f4dff-1854-4cf0-9546-1626e9a5856b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:25 crc kubenswrapper[5072]: I1124 11:47:25.824532 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d97f4dff-1854-4cf0-9546-1626e9a5856b-inventory" (OuterVolumeSpecName: "inventory") pod "d97f4dff-1854-4cf0-9546-1626e9a5856b" (UID: "d97f4dff-1854-4cf0-9546-1626e9a5856b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:25 crc kubenswrapper[5072]: I1124 11:47:25.825436 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d97f4dff-1854-4cf0-9546-1626e9a5856b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d97f4dff-1854-4cf0-9546-1626e9a5856b" (UID: "d97f4dff-1854-4cf0-9546-1626e9a5856b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:25 crc kubenswrapper[5072]: I1124 11:47:25.903013 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcks6\" (UniqueName: \"kubernetes.io/projected/d97f4dff-1854-4cf0-9546-1626e9a5856b-kube-api-access-fcks6\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:25 crc kubenswrapper[5072]: I1124 11:47:25.903060 5072 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d97f4dff-1854-4cf0-9546-1626e9a5856b-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:25 crc kubenswrapper[5072]: I1124 11:47:25.903077 5072 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d97f4dff-1854-4cf0-9546-1626e9a5856b-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:25 crc kubenswrapper[5072]: I1124 11:47:25.903095 5072 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d97f4dff-1854-4cf0-9546-1626e9a5856b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:26 crc kubenswrapper[5072]: I1124 11:47:26.243938 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9klcc" event={"ID":"d97f4dff-1854-4cf0-9546-1626e9a5856b","Type":"ContainerDied","Data":"b0f5368f7e96ec2e9763c94e52778bb7dea2a67c28b880af37fbda7f0e40b228"} Nov 24 11:47:26 crc kubenswrapper[5072]: I1124 11:47:26.244237 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0f5368f7e96ec2e9763c94e52778bb7dea2a67c28b880af37fbda7f0e40b228" Nov 24 11:47:26 crc kubenswrapper[5072]: I1124 11:47:26.244042 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9klcc" Nov 24 11:47:26 crc kubenswrapper[5072]: I1124 11:47:26.341812 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95"] Nov 24 11:47:26 crc kubenswrapper[5072]: E1124 11:47:26.342187 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d97f4dff-1854-4cf0-9546-1626e9a5856b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:47:26 crc kubenswrapper[5072]: I1124 11:47:26.342205 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="d97f4dff-1854-4cf0-9546-1626e9a5856b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:47:26 crc kubenswrapper[5072]: I1124 11:47:26.342431 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="d97f4dff-1854-4cf0-9546-1626e9a5856b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:47:26 crc kubenswrapper[5072]: I1124 11:47:26.343153 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95" Nov 24 11:47:26 crc kubenswrapper[5072]: I1124 11:47:26.346576 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 11:47:26 crc kubenswrapper[5072]: I1124 11:47:26.347043 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:47:26 crc kubenswrapper[5072]: I1124 11:47:26.347339 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b6s7d" Nov 24 11:47:26 crc kubenswrapper[5072]: I1124 11:47:26.347513 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:47:26 crc kubenswrapper[5072]: I1124 11:47:26.347671 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:47:26 crc kubenswrapper[5072]: I1124 11:47:26.355978 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95"] Nov 24 11:47:26 crc kubenswrapper[5072]: I1124 11:47:26.415407 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c82x5\" (UniqueName: \"kubernetes.io/projected/ed449e35-f14d-45cf-b172-49441c6d676a-kube-api-access-c82x5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95\" (UID: \"ed449e35-f14d-45cf-b172-49441c6d676a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95" Nov 24 11:47:26 crc kubenswrapper[5072]: I1124 11:47:26.415543 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed449e35-f14d-45cf-b172-49441c6d676a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95\" (UID: \"ed449e35-f14d-45cf-b172-49441c6d676a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95" Nov 24 11:47:26 crc kubenswrapper[5072]: I1124 11:47:26.415627 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ed449e35-f14d-45cf-b172-49441c6d676a-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95\" (UID: \"ed449e35-f14d-45cf-b172-49441c6d676a\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95" Nov 24 11:47:26 crc kubenswrapper[5072]: I1124 11:47:26.415801 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed449e35-f14d-45cf-b172-49441c6d676a-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95\" (UID: \"ed449e35-f14d-45cf-b172-49441c6d676a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95" Nov 24 11:47:26 crc kubenswrapper[5072]: I1124 11:47:26.516717 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed449e35-f14d-45cf-b172-49441c6d676a-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95\" (UID: \"ed449e35-f14d-45cf-b172-49441c6d676a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95" Nov 24 11:47:26 crc kubenswrapper[5072]: I1124 11:47:26.516778 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c82x5\" (UniqueName: \"kubernetes.io/projected/ed449e35-f14d-45cf-b172-49441c6d676a-kube-api-access-c82x5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95\" (UID: \"ed449e35-f14d-45cf-b172-49441c6d676a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95" Nov 24 11:47:26 crc kubenswrapper[5072]: I1124 11:47:26.516823 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed449e35-f14d-45cf-b172-49441c6d676a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95\" (UID: \"ed449e35-f14d-45cf-b172-49441c6d676a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95" Nov 24 11:47:26 crc kubenswrapper[5072]: I1124 11:47:26.516866 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ed449e35-f14d-45cf-b172-49441c6d676a-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95\" (UID: \"ed449e35-f14d-45cf-b172-49441c6d676a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95" Nov 24 11:47:26 crc kubenswrapper[5072]: I1124 11:47:26.520733 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed449e35-f14d-45cf-b172-49441c6d676a-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95\" (UID: \"ed449e35-f14d-45cf-b172-49441c6d676a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95" Nov 24 11:47:26 crc kubenswrapper[5072]: I1124 11:47:26.521155 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ed449e35-f14d-45cf-b172-49441c6d676a-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95\" (UID: \"ed449e35-f14d-45cf-b172-49441c6d676a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95" Nov 24 11:47:26 crc kubenswrapper[5072]: I1124 11:47:26.521708 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed449e35-f14d-45cf-b172-49441c6d676a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95\" (UID: \"ed449e35-f14d-45cf-b172-49441c6d676a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95" Nov 24 11:47:26 crc kubenswrapper[5072]: I1124 11:47:26.537240 5072 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-c82x5\" (UniqueName: \"kubernetes.io/projected/ed449e35-f14d-45cf-b172-49441c6d676a-kube-api-access-c82x5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95\" (UID: \"ed449e35-f14d-45cf-b172-49441c6d676a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95"
Nov 24 11:47:26 crc kubenswrapper[5072]: I1124 11:47:26.724861 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95"
Nov 24 11:47:27 crc kubenswrapper[5072]: I1124 11:47:27.243007 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95"]
Nov 24 11:47:27 crc kubenswrapper[5072]: I1124 11:47:27.250718 5072 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 24 11:47:28 crc kubenswrapper[5072]: I1124 11:47:28.262454 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95" event={"ID":"ed449e35-f14d-45cf-b172-49441c6d676a","Type":"ContainerStarted","Data":"434202070eff8935cb7b57b6cdf6ab566e2ba503d40151dbcb243766c4b27225"}
Nov 24 11:47:28 crc kubenswrapper[5072]: I1124 11:47:28.264395 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95" event={"ID":"ed449e35-f14d-45cf-b172-49441c6d676a","Type":"ContainerStarted","Data":"4b865573f0d809d9aaafbba3a4c538cf592b33eb43842dd940e7052dae6b7ac6"}
Nov 24 11:47:28 crc kubenswrapper[5072]: I1124 11:47:28.284319 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95" podStartSLOduration=1.7591173260000001 podStartE2EDuration="2.284302208s" podCreationTimestamp="2025-11-24 11:47:26 +0000 UTC" firstStartedPulling="2025-11-24 11:47:27.25052306 +0000 UTC m=+2298.962047536" lastFinishedPulling="2025-11-24 11:47:27.775707942 +0000 UTC m=+2299.487232418" observedRunningTime="2025-11-24 11:47:28.279215103 +0000 UTC m=+2299.990739589" watchObservedRunningTime="2025-11-24 11:47:28.284302208 +0000 UTC m=+2299.995826684"
Nov 24 11:47:29 crc kubenswrapper[5072]: I1124 11:47:29.022065 5072 scope.go:117] "RemoveContainer" containerID="6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493"
Nov 24 11:47:29 crc kubenswrapper[5072]: E1124 11:47:29.022544 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5"
Nov 24 11:47:38 crc kubenswrapper[5072]: I1124 11:47:38.359629 5072 generic.go:334] "Generic (PLEG): container finished" podID="ed449e35-f14d-45cf-b172-49441c6d676a" containerID="434202070eff8935cb7b57b6cdf6ab566e2ba503d40151dbcb243766c4b27225" exitCode=0
Nov 24 11:47:38 crc kubenswrapper[5072]: I1124 11:47:38.359754 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95" event={"ID":"ed449e35-f14d-45cf-b172-49441c6d676a","Type":"ContainerDied","Data":"434202070eff8935cb7b57b6cdf6ab566e2ba503d40151dbcb243766c4b27225"}
Nov 24 11:47:39 crc kubenswrapper[5072]: I1124 11:47:39.824187 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.014802 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ed449e35-f14d-45cf-b172-49441c6d676a-ceph\") pod \"ed449e35-f14d-45cf-b172-49441c6d676a\" (UID: \"ed449e35-f14d-45cf-b172-49441c6d676a\") "
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.014959 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed449e35-f14d-45cf-b172-49441c6d676a-inventory\") pod \"ed449e35-f14d-45cf-b172-49441c6d676a\" (UID: \"ed449e35-f14d-45cf-b172-49441c6d676a\") "
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.014992 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed449e35-f14d-45cf-b172-49441c6d676a-ssh-key\") pod \"ed449e35-f14d-45cf-b172-49441c6d676a\" (UID: \"ed449e35-f14d-45cf-b172-49441c6d676a\") "
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.015010 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c82x5\" (UniqueName: \"kubernetes.io/projected/ed449e35-f14d-45cf-b172-49441c6d676a-kube-api-access-c82x5\") pod \"ed449e35-f14d-45cf-b172-49441c6d676a\" (UID: \"ed449e35-f14d-45cf-b172-49441c6d676a\") "
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.021681 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed449e35-f14d-45cf-b172-49441c6d676a-kube-api-access-c82x5" (OuterVolumeSpecName: "kube-api-access-c82x5") pod "ed449e35-f14d-45cf-b172-49441c6d676a" (UID: "ed449e35-f14d-45cf-b172-49441c6d676a"). InnerVolumeSpecName "kube-api-access-c82x5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.022084 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed449e35-f14d-45cf-b172-49441c6d676a-ceph" (OuterVolumeSpecName: "ceph") pod "ed449e35-f14d-45cf-b172-49441c6d676a" (UID: "ed449e35-f14d-45cf-b172-49441c6d676a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.039601 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed449e35-f14d-45cf-b172-49441c6d676a-inventory" (OuterVolumeSpecName: "inventory") pod "ed449e35-f14d-45cf-b172-49441c6d676a" (UID: "ed449e35-f14d-45cf-b172-49441c6d676a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.044040 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed449e35-f14d-45cf-b172-49441c6d676a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ed449e35-f14d-45cf-b172-49441c6d676a" (UID: "ed449e35-f14d-45cf-b172-49441c6d676a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.116898 5072 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ed449e35-f14d-45cf-b172-49441c6d676a-ceph\") on node \"crc\" DevicePath \"\""
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.116938 5072 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed449e35-f14d-45cf-b172-49441c6d676a-inventory\") on node \"crc\" DevicePath \"\""
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.116951 5072 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed449e35-f14d-45cf-b172-49441c6d676a-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.116962 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c82x5\" (UniqueName: \"kubernetes.io/projected/ed449e35-f14d-45cf-b172-49441c6d676a-kube-api-access-c82x5\") on node \"crc\" DevicePath \"\""
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.380319 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95" event={"ID":"ed449e35-f14d-45cf-b172-49441c6d676a","Type":"ContainerDied","Data":"4b865573f0d809d9aaafbba3a4c538cf592b33eb43842dd940e7052dae6b7ac6"}
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.380413 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b865573f0d809d9aaafbba3a4c538cf592b33eb43842dd940e7052dae6b7ac6"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.380501 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.471510 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"]
Nov 24 11:47:40 crc kubenswrapper[5072]: E1124 11:47:40.471856 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed449e35-f14d-45cf-b172-49441c6d676a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.471870 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed449e35-f14d-45cf-b172-49441c6d676a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.472043 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed449e35-f14d-45cf-b172-49441c6d676a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.472589 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.474826 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.475994 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.476466 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.476691 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.477037 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.477116 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.477166 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b6s7d"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.477469 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.496442 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"]
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.525018 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/55863054-3da4-4d20-80f7-9dd43d6ce388-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.525077 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.525105 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.525137 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.525156 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.525174 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.525209 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc8hp\" (UniqueName: \"kubernetes.io/projected/55863054-3da4-4d20-80f7-9dd43d6ce388-kube-api-access-qc8hp\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.525231 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/55863054-3da4-4d20-80f7-9dd43d6ce388-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.525252 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.525280 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.525308 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.525328 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.525343 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/55863054-3da4-4d20-80f7-9dd43d6ce388-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.626991 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc8hp\" (UniqueName: \"kubernetes.io/projected/55863054-3da4-4d20-80f7-9dd43d6ce388-kube-api-access-qc8hp\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.627063 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/55863054-3da4-4d20-80f7-9dd43d6ce388-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.627099 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.627151 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.627203 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.627232 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.627296 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/55863054-3da4-4d20-80f7-9dd43d6ce388-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.627342 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/55863054-3da4-4d20-80f7-9dd43d6ce388-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.627409 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.627448 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.627493 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.627524 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.627552 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.632729 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.633642 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/55863054-3da4-4d20-80f7-9dd43d6ce388-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.638919 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/55863054-3da4-4d20-80f7-9dd43d6ce388-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.639778 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.642743 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.643035 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.644061 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.644948 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.645322 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.650239 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.651746 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/55863054-3da4-4d20-80f7-9dd43d6ce388-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.652940 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.654989 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc8hp\" (UniqueName: \"kubernetes.io/projected/55863054-3da4-4d20-80f7-9dd43d6ce388-kube-api-access-qc8hp\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:40 crc kubenswrapper[5072]: I1124 11:47:40.796970 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:47:41 crc kubenswrapper[5072]: I1124 11:47:41.374472 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"]
Nov 24 11:47:41 crc kubenswrapper[5072]: I1124 11:47:41.389527 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7" event={"ID":"55863054-3da4-4d20-80f7-9dd43d6ce388","Type":"ContainerStarted","Data":"0e4ca0d304b4232a1df5b3ddd827c3ff17d3e9edfe47cdb81217a69d5745d16c"}
Nov 24 11:47:42 crc kubenswrapper[5072]: I1124 11:47:42.017211 5072 scope.go:117] "RemoveContainer" containerID="6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493"
Nov 24 11:47:42 crc kubenswrapper[5072]: E1124 11:47:42.018071 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5"
Nov 24 11:47:42 crc kubenswrapper[5072]: I1124 11:47:42.398923 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7" event={"ID":"55863054-3da4-4d20-80f7-9dd43d6ce388","Type":"ContainerStarted","Data":"18c26255a07805ec98958828a6234e871c8835182d618a7e8f39ade348d290f4"}
Nov 24 11:47:56 crc kubenswrapper[5072]: I1124 11:47:56.016906 5072 scope.go:117] "RemoveContainer" containerID="6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493"
Nov 24 11:47:56 crc kubenswrapper[5072]: E1124 11:47:56.018043 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5"
Nov 24 11:48:07 crc kubenswrapper[5072]: I1124 11:48:07.017263 5072 scope.go:117] "RemoveContainer" containerID="6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493"
Nov 24 11:48:07 crc kubenswrapper[5072]: E1124 11:48:07.018042 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5"
Nov 24 11:48:17 crc kubenswrapper[5072]: I1124 11:48:17.732761 5072 generic.go:334] "Generic (PLEG): container finished" podID="55863054-3da4-4d20-80f7-9dd43d6ce388" containerID="18c26255a07805ec98958828a6234e871c8835182d618a7e8f39ade348d290f4" exitCode=0
Nov 24 11:48:17 crc kubenswrapper[5072]: I1124 11:48:17.732890 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7" event={"ID":"55863054-3da4-4d20-80f7-9dd43d6ce388","Type":"ContainerDied","Data":"18c26255a07805ec98958828a6234e871c8835182d618a7e8f39ade348d290f4"}
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.031208 5072 scope.go:117] "RemoveContainer" containerID="6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493"
Nov 24 11:48:19 crc kubenswrapper[5072]: E1124 11:48:19.031534 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5"
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.146690 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.222578 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-nova-combined-ca-bundle\") pod \"55863054-3da4-4d20-80f7-9dd43d6ce388\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") "
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.222647 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-ovn-combined-ca-bundle\") pod \"55863054-3da4-4d20-80f7-9dd43d6ce388\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") "
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.222696 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-neutron-metadata-combined-ca-bundle\") pod \"55863054-3da4-4d20-80f7-9dd43d6ce388\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") "
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.222721 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-libvirt-combined-ca-bundle\") pod \"55863054-3da4-4d20-80f7-9dd43d6ce388\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") "
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.222741 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/55863054-3da4-4d20-80f7-9dd43d6ce388-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"55863054-3da4-4d20-80f7-9dd43d6ce388\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") "
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.222763 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/55863054-3da4-4d20-80f7-9dd43d6ce388-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"55863054-3da4-4d20-80f7-9dd43d6ce388\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") "
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.222786 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-repo-setup-combined-ca-bundle\") pod \"55863054-3da4-4d20-80f7-9dd43d6ce388\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") "
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.222801 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-inventory\") pod \"55863054-3da4-4d20-80f7-9dd43d6ce388\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") "
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.222844 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-ssh-key\") pod \"55863054-3da4-4d20-80f7-9dd43d6ce388\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") "
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.222864 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc8hp\" (UniqueName: \"kubernetes.io/projected/55863054-3da4-4d20-80f7-9dd43d6ce388-kube-api-access-qc8hp\") pod \"55863054-3da4-4d20-80f7-9dd43d6ce388\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") "
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.222907 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-ceph\") pod \"55863054-3da4-4d20-80f7-9dd43d6ce388\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") "
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.222931 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-bootstrap-combined-ca-bundle\") pod \"55863054-3da4-4d20-80f7-9dd43d6ce388\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") "
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.222993 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/55863054-3da4-4d20-80f7-9dd43d6ce388-openstack-edpm-ipam-ovn-default-certs-0\") pod \"55863054-3da4-4d20-80f7-9dd43d6ce388\" (UID: \"55863054-3da4-4d20-80f7-9dd43d6ce388\") "
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.228737 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "55863054-3da4-4d20-80f7-9dd43d6ce388" (UID: "55863054-3da4-4d20-80f7-9dd43d6ce388"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.229825 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "55863054-3da4-4d20-80f7-9dd43d6ce388" (UID: "55863054-3da4-4d20-80f7-9dd43d6ce388"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.230397 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55863054-3da4-4d20-80f7-9dd43d6ce388-kube-api-access-qc8hp" (OuterVolumeSpecName: "kube-api-access-qc8hp") pod "55863054-3da4-4d20-80f7-9dd43d6ce388" (UID: "55863054-3da4-4d20-80f7-9dd43d6ce388"). InnerVolumeSpecName "kube-api-access-qc8hp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.230616 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "55863054-3da4-4d20-80f7-9dd43d6ce388" (UID: "55863054-3da4-4d20-80f7-9dd43d6ce388"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.231000 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55863054-3da4-4d20-80f7-9dd43d6ce388-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "55863054-3da4-4d20-80f7-9dd43d6ce388" (UID: "55863054-3da4-4d20-80f7-9dd43d6ce388"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.232020 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "55863054-3da4-4d20-80f7-9dd43d6ce388" (UID: "55863054-3da4-4d20-80f7-9dd43d6ce388"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.232421 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "55863054-3da4-4d20-80f7-9dd43d6ce388" (UID: "55863054-3da4-4d20-80f7-9dd43d6ce388"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.232465 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55863054-3da4-4d20-80f7-9dd43d6ce388-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "55863054-3da4-4d20-80f7-9dd43d6ce388" (UID: "55863054-3da4-4d20-80f7-9dd43d6ce388"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.232555 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-ceph" (OuterVolumeSpecName: "ceph") pod "55863054-3da4-4d20-80f7-9dd43d6ce388" (UID: "55863054-3da4-4d20-80f7-9dd43d6ce388"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.233214 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55863054-3da4-4d20-80f7-9dd43d6ce388-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "55863054-3da4-4d20-80f7-9dd43d6ce388" (UID: "55863054-3da4-4d20-80f7-9dd43d6ce388"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.237222 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "55863054-3da4-4d20-80f7-9dd43d6ce388" (UID: "55863054-3da4-4d20-80f7-9dd43d6ce388"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.250995 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "55863054-3da4-4d20-80f7-9dd43d6ce388" (UID: "55863054-3da4-4d20-80f7-9dd43d6ce388"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.260842 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-inventory" (OuterVolumeSpecName: "inventory") pod "55863054-3da4-4d20-80f7-9dd43d6ce388" (UID: "55863054-3da4-4d20-80f7-9dd43d6ce388"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.325337 5072 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.325417 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc8hp\" (UniqueName: \"kubernetes.io/projected/55863054-3da4-4d20-80f7-9dd43d6ce388-kube-api-access-qc8hp\") on node \"crc\" DevicePath \"\""
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.325438 5072 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-ceph\") on node \"crc\" DevicePath \"\""
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.325458 5072 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.325480 5072 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/55863054-3da4-4d20-80f7-9dd43d6ce388-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\""
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.325500 5072 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.325520 5072 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.325539 5072 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.325559 5072 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.325580 5072 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/55863054-3da4-4d20-80f7-9dd43d6ce388-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\""
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.325600 5072 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/55863054-3da4-4d20-80f7-9dd43d6ce388-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\""
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.325620 5072 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-inventory\") on node \"crc\" DevicePath \"\""
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.325638 5072 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55863054-3da4-4d20-80f7-9dd43d6ce388-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.753734 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7" event={"ID":"55863054-3da4-4d20-80f7-9dd43d6ce388","Type":"ContainerDied","Data":"0e4ca0d304b4232a1df5b3ddd827c3ff17d3e9edfe47cdb81217a69d5745d16c"}
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.753774 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7"
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.753788 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e4ca0d304b4232a1df5b3ddd827c3ff17d3e9edfe47cdb81217a69d5745d16c"
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.880778 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-nr928"]
Nov 24 11:48:19 crc kubenswrapper[5072]: E1124 11:48:19.881233 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55863054-3da4-4d20-80f7-9dd43d6ce388" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.881260 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="55863054-3da4-4d20-80f7-9dd43d6ce388" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.881516 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="55863054-3da4-4d20-80f7-9dd43d6ce388" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.882246 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-nr928"
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.884227 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.885105 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.885323 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b6s7d"
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.885607 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.885945 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Nov 24 11:48:19 crc kubenswrapper[5072]: I1124 11:48:19.910732 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-nr928"]
Nov 24 11:48:20 crc kubenswrapper[5072]: I1124 11:48:20.001671 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d8k67"]
Nov 24 11:48:20 crc kubenswrapper[5072]: I1124 11:48:20.003857 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d8k67"
Nov 24 11:48:20 crc kubenswrapper[5072]: I1124 11:48:20.013881 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d8k67"]
Nov 24 11:48:20 crc kubenswrapper[5072]: I1124 11:48:20.037821 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/95c83f58-e5a9-4038-ae80-2ba999d47b81-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-nr928\" (UID: \"95c83f58-e5a9-4038-ae80-2ba999d47b81\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-nr928"
Nov 24 11:48:20 crc kubenswrapper[5072]: I1124 11:48:20.037879 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/95c83f58-e5a9-4038-ae80-2ba999d47b81-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-nr928\" (UID: \"95c83f58-e5a9-4038-ae80-2ba999d47b81\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-nr928"
Nov 24 11:48:20 crc kubenswrapper[5072]: I1124 11:48:20.038205 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95c83f58-e5a9-4038-ae80-2ba999d47b81-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-nr928\" (UID: \"95c83f58-e5a9-4038-ae80-2ba999d47b81\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-nr928"
Nov 24 11:48:20 crc kubenswrapper[5072]: I1124 11:48:20.038245 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5hhn\" (UniqueName: \"kubernetes.io/projected/95c83f58-e5a9-4038-ae80-2ba999d47b81-kube-api-access-r5hhn\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-nr928\" (UID: \"95c83f58-e5a9-4038-ae80-2ba999d47b81\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-nr928"
Nov 24 11:48:20 crc kubenswrapper[5072]: I1124 11:48:20.139566 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95c83f58-e5a9-4038-ae80-2ba999d47b81-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-nr928\" (UID: \"95c83f58-e5a9-4038-ae80-2ba999d47b81\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-nr928"
Nov 24 11:48:20 crc kubenswrapper[5072]: I1124 11:48:20.139615 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5hhn\" (UniqueName: \"kubernetes.io/projected/95c83f58-e5a9-4038-ae80-2ba999d47b81-kube-api-access-r5hhn\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-nr928\" (UID: \"95c83f58-e5a9-4038-ae80-2ba999d47b81\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-nr928"
Nov 24 11:48:20 crc kubenswrapper[5072]: I1124 11:48:20.139708 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/95c83f58-e5a9-4038-ae80-2ba999d47b81-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-nr928\" (UID: \"95c83f58-e5a9-4038-ae80-2ba999d47b81\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-nr928"
Nov 24 11:48:20 crc kubenswrapper[5072]: I1124 11:48:20.139742 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d52fce03-19bd-4073-ba53-3acaa9944571-catalog-content\") pod \"certified-operators-d8k67\" (UID: \"d52fce03-19bd-4073-ba53-3acaa9944571\") " pod="openshift-marketplace/certified-operators-d8k67"
Nov 24 11:48:20 crc kubenswrapper[5072]: I1124 11:48:20.139793 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/95c83f58-e5a9-4038-ae80-2ba999d47b81-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-nr928\" (UID: \"95c83f58-e5a9-4038-ae80-2ba999d47b81\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-nr928"
Nov 24 11:48:20 crc kubenswrapper[5072]: I1124 11:48:20.139823 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zmdt\" (UniqueName: \"kubernetes.io/projected/d52fce03-19bd-4073-ba53-3acaa9944571-kube-api-access-6zmdt\") pod \"certified-operators-d8k67\" (UID: \"d52fce03-19bd-4073-ba53-3acaa9944571\") " pod="openshift-marketplace/certified-operators-d8k67"
Nov 24 11:48:20 crc kubenswrapper[5072]: I1124 11:48:20.139855 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d52fce03-19bd-4073-ba53-3acaa9944571-utilities\") pod \"certified-operators-d8k67\" (UID: \"d52fce03-19bd-4073-ba53-3acaa9944571\") " pod="openshift-marketplace/certified-operators-d8k67"
Nov 24 11:48:20 crc kubenswrapper[5072]: I1124 11:48:20.145173 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/95c83f58-e5a9-4038-ae80-2ba999d47b81-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-nr928\" (UID: \"95c83f58-e5a9-4038-ae80-2ba999d47b81\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-nr928"
Nov 24 11:48:20 crc kubenswrapper[5072]: I1124 11:48:20.146367 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/95c83f58-e5a9-4038-ae80-2ba999d47b81-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-nr928\" (UID: \"95c83f58-e5a9-4038-ae80-2ba999d47b81\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-nr928"
Nov 24 11:48:20 crc kubenswrapper[5072]: I1124 11:48:20.146881 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95c83f58-e5a9-4038-ae80-2ba999d47b81-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-nr928\" (UID: \"95c83f58-e5a9-4038-ae80-2ba999d47b81\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-nr928"
Nov 24 11:48:20 crc kubenswrapper[5072]: I1124 11:48:20.157480 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5hhn\" (UniqueName: \"kubernetes.io/projected/95c83f58-e5a9-4038-ae80-2ba999d47b81-kube-api-access-r5hhn\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-nr928\" (UID: \"95c83f58-e5a9-4038-ae80-2ba999d47b81\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-nr928"
Nov 24 11:48:20 crc kubenswrapper[5072]: I1124 11:48:20.200717 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-nr928"
Nov 24 11:48:20 crc kubenswrapper[5072]: I1124 11:48:20.244360 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d52fce03-19bd-4073-ba53-3acaa9944571-catalog-content\") pod \"certified-operators-d8k67\" (UID: \"d52fce03-19bd-4073-ba53-3acaa9944571\") " pod="openshift-marketplace/certified-operators-d8k67"
Nov 24 11:48:20 crc kubenswrapper[5072]: I1124 11:48:20.244455 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zmdt\" (UniqueName: \"kubernetes.io/projected/d52fce03-19bd-4073-ba53-3acaa9944571-kube-api-access-6zmdt\") pod \"certified-operators-d8k67\" (UID: \"d52fce03-19bd-4073-ba53-3acaa9944571\") " pod="openshift-marketplace/certified-operators-d8k67"
Nov 24 11:48:20 crc kubenswrapper[5072]: I1124 11:48:20.244487 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d52fce03-19bd-4073-ba53-3acaa9944571-utilities\") pod \"certified-operators-d8k67\" (UID: \"d52fce03-19bd-4073-ba53-3acaa9944571\") " pod="openshift-marketplace/certified-operators-d8k67"
Nov 24 11:48:20 crc kubenswrapper[5072]: I1124 11:48:20.244955 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d52fce03-19bd-4073-ba53-3acaa9944571-catalog-content\") pod \"certified-operators-d8k67\" (UID: \"d52fce03-19bd-4073-ba53-3acaa9944571\") " pod="openshift-marketplace/certified-operators-d8k67"
Nov 24 11:48:20 crc kubenswrapper[5072]: I1124 11:48:20.247272 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d52fce03-19bd-4073-ba53-3acaa9944571-utilities\") pod \"certified-operators-d8k67\" (UID: \"d52fce03-19bd-4073-ba53-3acaa9944571\") " pod="openshift-marketplace/certified-operators-d8k67"
Nov 24 11:48:20 crc kubenswrapper[5072]: I1124 11:48:20.264240 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zmdt\" (UniqueName: \"kubernetes.io/projected/d52fce03-19bd-4073-ba53-3acaa9944571-kube-api-access-6zmdt\") pod \"certified-operators-d8k67\" (UID: \"d52fce03-19bd-4073-ba53-3acaa9944571\") " pod="openshift-marketplace/certified-operators-d8k67"
Nov 24 11:48:20 crc kubenswrapper[5072]: I1124 11:48:20.320354 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d8k67"
Nov 24 11:48:20 crc kubenswrapper[5072]: W1124 11:48:20.818775 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95c83f58_e5a9_4038_ae80_2ba999d47b81.slice/crio-57bdc759838b84fabed8966221cc695a77373c520ae4d4376f4789353d2ebceb WatchSource:0}: Error finding container 57bdc759838b84fabed8966221cc695a77373c520ae4d4376f4789353d2ebceb: Status 404 returned error can't find the container with id 57bdc759838b84fabed8966221cc695a77373c520ae4d4376f4789353d2ebceb
Nov 24 11:48:20 crc kubenswrapper[5072]: I1124 11:48:20.822756 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-nr928"]
Nov 24 11:48:20 crc kubenswrapper[5072]: I1124 11:48:20.861847 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d8k67"]
Nov 24 11:48:20 crc kubenswrapper[5072]: W1124 11:48:20.869243 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd52fce03_19bd_4073_ba53_3acaa9944571.slice/crio-d54c3488490def88b8fc9340afba41f4a9befe8470216338af7cbc2d1910bc83 WatchSource:0}: Error finding container d54c3488490def88b8fc9340afba41f4a9befe8470216338af7cbc2d1910bc83: Status 404 returned error can't find the container with id d54c3488490def88b8fc9340afba41f4a9befe8470216338af7cbc2d1910bc83
Nov 24 11:48:21 crc kubenswrapper[5072]: I1124 11:48:21.774300 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-nr928" event={"ID":"95c83f58-e5a9-4038-ae80-2ba999d47b81","Type":"ContainerStarted","Data":"994a5ccd66fcb4d9ff40a2fd2889fe3d87c55c08c9055acf8e7553e609d3be0b"}
Nov 24 11:48:21 crc kubenswrapper[5072]: I1124 11:48:21.775464 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-nr928" event={"ID":"95c83f58-e5a9-4038-ae80-2ba999d47b81","Type":"ContainerStarted","Data":"57bdc759838b84fabed8966221cc695a77373c520ae4d4376f4789353d2ebceb"}
Nov 24 11:48:21 crc kubenswrapper[5072]: I1124 11:48:21.776029 5072 generic.go:334] "Generic (PLEG): container finished" podID="d52fce03-19bd-4073-ba53-3acaa9944571" containerID="c4c343bcad5e95c76bfe16e4f774af6d12cdc1a094c443067b187d1aa7059fb5" exitCode=0
Nov 24 11:48:21 crc kubenswrapper[5072]: I1124 11:48:21.776078 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8k67" event={"ID":"d52fce03-19bd-4073-ba53-3acaa9944571","Type":"ContainerDied","Data":"c4c343bcad5e95c76bfe16e4f774af6d12cdc1a094c443067b187d1aa7059fb5"}
Nov 24 11:48:21 crc kubenswrapper[5072]: I1124 11:48:21.776103 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8k67" event={"ID":"d52fce03-19bd-4073-ba53-3acaa9944571","Type":"ContainerStarted","Data":"d54c3488490def88b8fc9340afba41f4a9befe8470216338af7cbc2d1910bc83"}
Nov 24 11:48:21 crc kubenswrapper[5072]: I1124 11:48:21.805790 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-nr928" podStartSLOduration=2.213318834 podStartE2EDuration="2.80577379s" podCreationTimestamp="2025-11-24 11:48:19 +0000 UTC" firstStartedPulling="2025-11-24 11:48:20.821601256 +0000 UTC m=+2352.533125752" lastFinishedPulling="2025-11-24 11:48:21.414056232 +0000 UTC m=+2353.125580708" observedRunningTime="2025-11-24 11:48:21.799802514 +0000 UTC m=+2353.511326990" watchObservedRunningTime="2025-11-24 11:48:21.80577379 +0000 UTC m=+2353.517298256"
Nov 24 11:48:22 crc kubenswrapper[5072]: I1124 11:48:22.786459 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8k67" event={"ID":"d52fce03-19bd-4073-ba53-3acaa9944571","Type":"ContainerStarted","Data":"00cf12fd7a88f21b2cfe4711e193235f9abbd663951799f94c475e6f152c4f90"}
Nov 24 11:48:23 crc kubenswrapper[5072]: I1124 11:48:23.802125 5072 generic.go:334] "Generic (PLEG): container finished" podID="d52fce03-19bd-4073-ba53-3acaa9944571" containerID="00cf12fd7a88f21b2cfe4711e193235f9abbd663951799f94c475e6f152c4f90" exitCode=0
Nov 24 11:48:23 crc kubenswrapper[5072]: I1124 11:48:23.802190 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8k67" event={"ID":"d52fce03-19bd-4073-ba53-3acaa9944571","Type":"ContainerDied","Data":"00cf12fd7a88f21b2cfe4711e193235f9abbd663951799f94c475e6f152c4f90"}
Nov 24 11:48:24 crc kubenswrapper[5072]: I1124 11:48:24.823911 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8k67" event={"ID":"d52fce03-19bd-4073-ba53-3acaa9944571","Type":"ContainerStarted","Data":"78292ede29c3343eec99dbe2f05ec62d1eb1765ab594ede31eae8e2b2e209c97"}
Nov 24 11:48:24 crc kubenswrapper[5072]: I1124 11:48:24.852655 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d8k67" podStartSLOduration=3.416909884 podStartE2EDuration="5.852631519s" podCreationTimestamp="2025-11-24 11:48:19 +0000 UTC" firstStartedPulling="2025-11-24 11:48:21.777281944 +0000 UTC m=+2353.488806420" lastFinishedPulling="2025-11-24 11:48:24.213003579 +0000 UTC m=+2355.924528055" observedRunningTime="2025-11-24 11:48:24.8485889 +0000 UTC m=+2356.560113386" watchObservedRunningTime="2025-11-24 11:48:24.852631519 +0000 UTC m=+2356.564156025"
Nov 24 11:48:27 crc kubenswrapper[5072]: E1124 11:48:27.557318 5072 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95c83f58_e5a9_4038_ae80_2ba999d47b81.slice/crio-conmon-994a5ccd66fcb4d9ff40a2fd2889fe3d87c55c08c9055acf8e7553e609d3be0b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95c83f58_e5a9_4038_ae80_2ba999d47b81.slice/crio-994a5ccd66fcb4d9ff40a2fd2889fe3d87c55c08c9055acf8e7553e609d3be0b.scope\": RecentStats: unable to find data in memory cache]"
Nov 24 11:48:27 crc kubenswrapper[5072]: I1124 11:48:27.864507 5072 generic.go:334] "Generic (PLEG): container finished" podID="95c83f58-e5a9-4038-ae80-2ba999d47b81" containerID="994a5ccd66fcb4d9ff40a2fd2889fe3d87c55c08c9055acf8e7553e609d3be0b" exitCode=0
Nov 24 11:48:27 crc kubenswrapper[5072]: I1124 11:48:27.864585 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-nr928" event={"ID":"95c83f58-e5a9-4038-ae80-2ba999d47b81","Type":"ContainerDied","Data":"994a5ccd66fcb4d9ff40a2fd2889fe3d87c55c08c9055acf8e7553e609d3be0b"}
Nov 24 11:48:29 crc kubenswrapper[5072]: I1124 11:48:29.303747 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-nr928"
Nov 24 11:48:29 crc kubenswrapper[5072]: I1124 11:48:29.485974 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95c83f58-e5a9-4038-ae80-2ba999d47b81-inventory\") pod \"95c83f58-e5a9-4038-ae80-2ba999d47b81\" (UID: \"95c83f58-e5a9-4038-ae80-2ba999d47b81\") "
Nov 24 11:48:29 crc kubenswrapper[5072]: I1124 11:48:29.486155 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/95c83f58-e5a9-4038-ae80-2ba999d47b81-ceph\") pod \"95c83f58-e5a9-4038-ae80-2ba999d47b81\" (UID: \"95c83f58-e5a9-4038-ae80-2ba999d47b81\") "
Nov 24 11:48:29 crc kubenswrapper[5072]: I1124 11:48:29.486177 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/95c83f58-e5a9-4038-ae80-2ba999d47b81-ssh-key\") pod \"95c83f58-e5a9-4038-ae80-2ba999d47b81\" (UID: \"95c83f58-e5a9-4038-ae80-2ba999d47b81\") "
Nov 24 11:48:29 crc kubenswrapper[5072]: I1124 11:48:29.486216 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5hhn\" (UniqueName: \"kubernetes.io/projected/95c83f58-e5a9-4038-ae80-2ba999d47b81-kube-api-access-r5hhn\") pod \"95c83f58-e5a9-4038-ae80-2ba999d47b81\" (UID: \"95c83f58-e5a9-4038-ae80-2ba999d47b81\") "
Nov 24 11:48:29 crc kubenswrapper[5072]: I1124 11:48:29.491737 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c83f58-e5a9-4038-ae80-2ba999d47b81-ceph" (OuterVolumeSpecName: "ceph") pod "95c83f58-e5a9-4038-ae80-2ba999d47b81" (UID: "95c83f58-e5a9-4038-ae80-2ba999d47b81"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:48:29 crc kubenswrapper[5072]: I1124 11:48:29.493263 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95c83f58-e5a9-4038-ae80-2ba999d47b81-kube-api-access-r5hhn" (OuterVolumeSpecName: "kube-api-access-r5hhn") pod "95c83f58-e5a9-4038-ae80-2ba999d47b81" (UID: "95c83f58-e5a9-4038-ae80-2ba999d47b81"). InnerVolumeSpecName "kube-api-access-r5hhn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:48:29 crc kubenswrapper[5072]: I1124 11:48:29.519512 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c83f58-e5a9-4038-ae80-2ba999d47b81-inventory" (OuterVolumeSpecName: "inventory") pod "95c83f58-e5a9-4038-ae80-2ba999d47b81" (UID: "95c83f58-e5a9-4038-ae80-2ba999d47b81"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:48:29 crc kubenswrapper[5072]: I1124 11:48:29.521269 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c83f58-e5a9-4038-ae80-2ba999d47b81-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "95c83f58-e5a9-4038-ae80-2ba999d47b81" (UID: "95c83f58-e5a9-4038-ae80-2ba999d47b81"). InnerVolumeSpecName "ssh-key".
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:48:29 crc kubenswrapper[5072]: I1124 11:48:29.590046 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5hhn\" (UniqueName: \"kubernetes.io/projected/95c83f58-e5a9-4038-ae80-2ba999d47b81-kube-api-access-r5hhn\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:29 crc kubenswrapper[5072]: I1124 11:48:29.590072 5072 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95c83f58-e5a9-4038-ae80-2ba999d47b81-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:29 crc kubenswrapper[5072]: I1124 11:48:29.590082 5072 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/95c83f58-e5a9-4038-ae80-2ba999d47b81-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:29 crc kubenswrapper[5072]: I1124 11:48:29.590089 5072 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/95c83f58-e5a9-4038-ae80-2ba999d47b81-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:29 crc kubenswrapper[5072]: I1124 11:48:29.885675 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-nr928" event={"ID":"95c83f58-e5a9-4038-ae80-2ba999d47b81","Type":"ContainerDied","Data":"57bdc759838b84fabed8966221cc695a77373c520ae4d4376f4789353d2ebceb"} Nov 24 11:48:29 crc kubenswrapper[5072]: I1124 11:48:29.885715 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57bdc759838b84fabed8966221cc695a77373c520ae4d4376f4789353d2ebceb" Nov 24 11:48:29 crc kubenswrapper[5072]: I1124 11:48:29.885770 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-nr928" Nov 24 11:48:29 crc kubenswrapper[5072]: I1124 11:48:29.982739 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt"] Nov 24 11:48:29 crc kubenswrapper[5072]: E1124 11:48:29.983269 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c83f58-e5a9-4038-ae80-2ba999d47b81" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Nov 24 11:48:29 crc kubenswrapper[5072]: I1124 11:48:29.983295 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c83f58-e5a9-4038-ae80-2ba999d47b81" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Nov 24 11:48:29 crc kubenswrapper[5072]: I1124 11:48:29.983542 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="95c83f58-e5a9-4038-ae80-2ba999d47b81" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Nov 24 11:48:29 crc kubenswrapper[5072]: I1124 11:48:29.984242 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt" Nov 24 11:48:29 crc kubenswrapper[5072]: I1124 11:48:29.986258 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 11:48:29 crc kubenswrapper[5072]: I1124 11:48:29.988545 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b6s7d" Nov 24 11:48:29 crc kubenswrapper[5072]: I1124 11:48:29.987064 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 24 11:48:29 crc kubenswrapper[5072]: I1124 11:48:29.987208 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:48:29 crc kubenswrapper[5072]: I1124 11:48:29.987276 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:48:29 crc kubenswrapper[5072]: I1124 11:48:29.989551 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:48:29 crc kubenswrapper[5072]: I1124 11:48:29.996196 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt"] Nov 24 11:48:30 crc kubenswrapper[5072]: I1124 11:48:30.018776 5072 scope.go:117] "RemoveContainer" containerID="6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493" Nov 24 11:48:30 crc kubenswrapper[5072]: E1124 11:48:30.020123 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:48:30 crc kubenswrapper[5072]: I1124 11:48:30.099485 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qk9gt\" (UID: \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt" Nov 24 11:48:30 crc kubenswrapper[5072]: I1124 11:48:30.100029 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qk9gt\" (UID: \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt" Nov 24 11:48:30 crc kubenswrapper[5072]: I1124 11:48:30.100074 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5hxw\" (UniqueName: \"kubernetes.io/projected/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-kube-api-access-s5hxw\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qk9gt\" (UID: \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt" Nov 24 11:48:30 crc kubenswrapper[5072]: I1124 11:48:30.100115 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qk9gt\" (UID: \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt" Nov 24 11:48:30 crc kubenswrapper[5072]: I1124 11:48:30.100157 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qk9gt\" (UID: \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt" Nov 24 11:48:30 crc kubenswrapper[5072]: I1124 11:48:30.100185 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qk9gt\" (UID: \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt" Nov 24 11:48:30 crc kubenswrapper[5072]: I1124 11:48:30.201884 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qk9gt\" (UID: \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt" Nov 24 11:48:30 crc kubenswrapper[5072]: I1124 11:48:30.201955 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qk9gt\" (UID: \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt" Nov 24 11:48:30 crc kubenswrapper[5072]: I1124 11:48:30.201987 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5hxw\" (UniqueName: \"kubernetes.io/projected/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-kube-api-access-s5hxw\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qk9gt\" (UID: \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt" Nov 24 11:48:30 crc kubenswrapper[5072]: I1124 11:48:30.202027 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qk9gt\" (UID: \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt" Nov 24 11:48:30 crc kubenswrapper[5072]: I1124 11:48:30.202059 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qk9gt\" (UID: \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt" Nov 24 11:48:30 crc kubenswrapper[5072]: I1124 11:48:30.202093 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qk9gt\" (UID: 
\"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt" Nov 24 11:48:30 crc kubenswrapper[5072]: I1124 11:48:30.203217 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qk9gt\" (UID: \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt" Nov 24 11:48:30 crc kubenswrapper[5072]: I1124 11:48:30.206728 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qk9gt\" (UID: \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt" Nov 24 11:48:30 crc kubenswrapper[5072]: I1124 11:48:30.206769 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qk9gt\" (UID: \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt" Nov 24 11:48:30 crc kubenswrapper[5072]: I1124 11:48:30.207025 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qk9gt\" (UID: \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt" Nov 24 11:48:30 crc kubenswrapper[5072]: I1124 11:48:30.208259 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qk9gt\" (UID: \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt" Nov 24 11:48:30 crc kubenswrapper[5072]: I1124 11:48:30.219261 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5hxw\" (UniqueName: \"kubernetes.io/projected/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-kube-api-access-s5hxw\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qk9gt\" (UID: \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt" Nov 24 11:48:30 crc kubenswrapper[5072]: I1124 11:48:30.307821 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt" Nov 24 11:48:30 crc kubenswrapper[5072]: I1124 11:48:30.321522 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d8k67" Nov 24 11:48:30 crc kubenswrapper[5072]: I1124 11:48:30.321573 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d8k67" Nov 24 11:48:30 crc kubenswrapper[5072]: I1124 11:48:30.369583 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d8k67" Nov 24 11:48:30 crc kubenswrapper[5072]: I1124 11:48:30.854742 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt"] Nov 24 11:48:30 crc kubenswrapper[5072]: I1124 11:48:30.896669 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt" event={"ID":"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0","Type":"ContainerStarted","Data":"22ccacea161693b79660da995522f0c56d523755f6c6cfc01ae8a535ae1e9f0c"} Nov 24 11:48:30 crc kubenswrapper[5072]: I1124 11:48:30.944598 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d8k67" Nov 24 11:48:30 crc kubenswrapper[5072]: I1124 11:48:30.985280 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d8k67"] Nov 24 11:48:31 crc kubenswrapper[5072]: I1124 11:48:31.935284 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt" event={"ID":"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0","Type":"ContainerStarted","Data":"a13f14f34b2715d9842f15a2d3f1645ed3bedc94fa7bd78b8df7afbbc81498b2"} Nov 24 11:48:31 crc kubenswrapper[5072]: I1124 11:48:31.966523 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt" podStartSLOduration=2.497919093 podStartE2EDuration="2.966501261s" podCreationTimestamp="2025-11-24 11:48:29 +0000 UTC" firstStartedPulling="2025-11-24 11:48:30.864599598 +0000 UTC m=+2362.576124074" lastFinishedPulling="2025-11-24 11:48:31.333181746 +0000 UTC m=+2363.044706242" observedRunningTime="2025-11-24 11:48:31.959150941 +0000 UTC m=+2363.670675427" watchObservedRunningTime="2025-11-24 11:48:31.966501261 +0000 UTC m=+2363.678025737" Nov 24 11:48:32 crc kubenswrapper[5072]: I1124 11:48:32.945065 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d8k67" podUID="d52fce03-19bd-4073-ba53-3acaa9944571" containerName="registry-server" containerID="cri-o://78292ede29c3343eec99dbe2f05ec62d1eb1765ab594ede31eae8e2b2e209c97" gracePeriod=2 Nov 24 11:48:33 crc kubenswrapper[5072]: I1124 11:48:33.419567 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d8k67" Nov 24 11:48:33 crc kubenswrapper[5072]: I1124 11:48:33.558657 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d52fce03-19bd-4073-ba53-3acaa9944571-utilities\") pod \"d52fce03-19bd-4073-ba53-3acaa9944571\" (UID: \"d52fce03-19bd-4073-ba53-3acaa9944571\") " Nov 24 11:48:33 crc kubenswrapper[5072]: I1124 11:48:33.558765 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zmdt\" (UniqueName: \"kubernetes.io/projected/d52fce03-19bd-4073-ba53-3acaa9944571-kube-api-access-6zmdt\") pod \"d52fce03-19bd-4073-ba53-3acaa9944571\" (UID: \"d52fce03-19bd-4073-ba53-3acaa9944571\") " Nov 24 11:48:33 crc kubenswrapper[5072]: I1124 11:48:33.559815 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d52fce03-19bd-4073-ba53-3acaa9944571-catalog-content\") pod \"d52fce03-19bd-4073-ba53-3acaa9944571\" (UID: \"d52fce03-19bd-4073-ba53-3acaa9944571\") " Nov 24 11:48:33 crc kubenswrapper[5072]: I1124 11:48:33.559896 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d52fce03-19bd-4073-ba53-3acaa9944571-utilities" (OuterVolumeSpecName: "utilities") pod "d52fce03-19bd-4073-ba53-3acaa9944571" (UID: "d52fce03-19bd-4073-ba53-3acaa9944571"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:48:33 crc kubenswrapper[5072]: I1124 11:48:33.560161 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d52fce03-19bd-4073-ba53-3acaa9944571-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:33 crc kubenswrapper[5072]: I1124 11:48:33.565039 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d52fce03-19bd-4073-ba53-3acaa9944571-kube-api-access-6zmdt" (OuterVolumeSpecName: "kube-api-access-6zmdt") pod "d52fce03-19bd-4073-ba53-3acaa9944571" (UID: "d52fce03-19bd-4073-ba53-3acaa9944571"). InnerVolumeSpecName "kube-api-access-6zmdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:48:33 crc kubenswrapper[5072]: I1124 11:48:33.611273 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d52fce03-19bd-4073-ba53-3acaa9944571-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d52fce03-19bd-4073-ba53-3acaa9944571" (UID: "d52fce03-19bd-4073-ba53-3acaa9944571"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:48:33 crc kubenswrapper[5072]: I1124 11:48:33.661930 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zmdt\" (UniqueName: \"kubernetes.io/projected/d52fce03-19bd-4073-ba53-3acaa9944571-kube-api-access-6zmdt\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:33 crc kubenswrapper[5072]: I1124 11:48:33.662162 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d52fce03-19bd-4073-ba53-3acaa9944571-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:33 crc kubenswrapper[5072]: I1124 11:48:33.955269 5072 generic.go:334] "Generic (PLEG): container finished" podID="d52fce03-19bd-4073-ba53-3acaa9944571" containerID="78292ede29c3343eec99dbe2f05ec62d1eb1765ab594ede31eae8e2b2e209c97" exitCode=0 Nov 24 11:48:33 crc kubenswrapper[5072]: I1124 11:48:33.955339 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8k67" event={"ID":"d52fce03-19bd-4073-ba53-3acaa9944571","Type":"ContainerDied","Data":"78292ede29c3343eec99dbe2f05ec62d1eb1765ab594ede31eae8e2b2e209c97"} Nov 24 11:48:33 crc kubenswrapper[5072]: I1124 11:48:33.955671 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8k67" event={"ID":"d52fce03-19bd-4073-ba53-3acaa9944571","Type":"ContainerDied","Data":"d54c3488490def88b8fc9340afba41f4a9befe8470216338af7cbc2d1910bc83"} Nov 24 11:48:33 crc kubenswrapper[5072]: I1124 11:48:33.955720 5072 scope.go:117] "RemoveContainer" containerID="78292ede29c3343eec99dbe2f05ec62d1eb1765ab594ede31eae8e2b2e209c97" Nov 24 11:48:33 crc kubenswrapper[5072]: I1124 11:48:33.955357 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d8k67" Nov 24 11:48:33 crc kubenswrapper[5072]: I1124 11:48:33.988514 5072 scope.go:117] "RemoveContainer" containerID="00cf12fd7a88f21b2cfe4711e193235f9abbd663951799f94c475e6f152c4f90" Nov 24 11:48:34 crc kubenswrapper[5072]: I1124 11:48:34.016673 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d8k67"] Nov 24 11:48:34 crc kubenswrapper[5072]: I1124 11:48:34.018780 5072 scope.go:117] "RemoveContainer" containerID="c4c343bcad5e95c76bfe16e4f774af6d12cdc1a094c443067b187d1aa7059fb5" Nov 24 11:48:34 crc kubenswrapper[5072]: I1124 11:48:34.025148 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d8k67"] Nov 24 11:48:34 crc kubenswrapper[5072]: I1124 11:48:34.059814 5072 scope.go:117] "RemoveContainer" containerID="78292ede29c3343eec99dbe2f05ec62d1eb1765ab594ede31eae8e2b2e209c97" Nov 24 11:48:34 crc kubenswrapper[5072]: E1124 11:48:34.060202 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78292ede29c3343eec99dbe2f05ec62d1eb1765ab594ede31eae8e2b2e209c97\": container with ID starting with 78292ede29c3343eec99dbe2f05ec62d1eb1765ab594ede31eae8e2b2e209c97 not found: ID does not exist" containerID="78292ede29c3343eec99dbe2f05ec62d1eb1765ab594ede31eae8e2b2e209c97" Nov 24 11:48:34 crc kubenswrapper[5072]: I1124 11:48:34.060248 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78292ede29c3343eec99dbe2f05ec62d1eb1765ab594ede31eae8e2b2e209c97"} err="failed to get container status \"78292ede29c3343eec99dbe2f05ec62d1eb1765ab594ede31eae8e2b2e209c97\": rpc error: code = NotFound desc = could not find container \"78292ede29c3343eec99dbe2f05ec62d1eb1765ab594ede31eae8e2b2e209c97\": container with ID starting with 78292ede29c3343eec99dbe2f05ec62d1eb1765ab594ede31eae8e2b2e209c97 not found: ID does not exist" Nov 24 11:48:34 crc kubenswrapper[5072]: I1124 11:48:34.060272 5072 scope.go:117] "RemoveContainer" containerID="00cf12fd7a88f21b2cfe4711e193235f9abbd663951799f94c475e6f152c4f90" Nov 24 11:48:34 crc kubenswrapper[5072]: E1124 11:48:34.061074 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00cf12fd7a88f21b2cfe4711e193235f9abbd663951799f94c475e6f152c4f90\": container with ID starting with 00cf12fd7a88f21b2cfe4711e193235f9abbd663951799f94c475e6f152c4f90 not found: ID does not exist" containerID="00cf12fd7a88f21b2cfe4711e193235f9abbd663951799f94c475e6f152c4f90" Nov 24 11:48:34 crc kubenswrapper[5072]: I1124 11:48:34.061102 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00cf12fd7a88f21b2cfe4711e193235f9abbd663951799f94c475e6f152c4f90"} err="failed to get container status \"00cf12fd7a88f21b2cfe4711e193235f9abbd663951799f94c475e6f152c4f90\": rpc error: code = NotFound desc = could not find container \"00cf12fd7a88f21b2cfe4711e193235f9abbd663951799f94c475e6f152c4f90\": container with ID starting with 00cf12fd7a88f21b2cfe4711e193235f9abbd663951799f94c475e6f152c4f90 not found: ID does not exist" Nov 24 11:48:34 crc kubenswrapper[5072]: I1124 11:48:34.061116 5072 scope.go:117] "RemoveContainer" containerID="c4c343bcad5e95c76bfe16e4f774af6d12cdc1a094c443067b187d1aa7059fb5" Nov 24 11:48:34 crc kubenswrapper[5072]: E1124 11:48:34.061428 5072 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c4c343bcad5e95c76bfe16e4f774af6d12cdc1a094c443067b187d1aa7059fb5\": container with ID starting with c4c343bcad5e95c76bfe16e4f774af6d12cdc1a094c443067b187d1aa7059fb5 not found: ID does not exist" containerID="c4c343bcad5e95c76bfe16e4f774af6d12cdc1a094c443067b187d1aa7059fb5" Nov 24 11:48:34 crc kubenswrapper[5072]: I1124 11:48:34.061456 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4c343bcad5e95c76bfe16e4f774af6d12cdc1a094c443067b187d1aa7059fb5"} err="failed to get container status \"c4c343bcad5e95c76bfe16e4f774af6d12cdc1a094c443067b187d1aa7059fb5\": rpc error: code = NotFound desc = could not find container \"c4c343bcad5e95c76bfe16e4f774af6d12cdc1a094c443067b187d1aa7059fb5\": container with ID starting with c4c343bcad5e95c76bfe16e4f774af6d12cdc1a094c443067b187d1aa7059fb5 not found: ID does not exist" Nov 24 11:48:35 crc kubenswrapper[5072]: I1124 11:48:35.028606 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d52fce03-19bd-4073-ba53-3acaa9944571" path="/var/lib/kubelet/pods/d52fce03-19bd-4073-ba53-3acaa9944571/volumes" Nov 24 11:48:43 crc kubenswrapper[5072]: I1124 11:48:43.017012 5072 scope.go:117] "RemoveContainer" containerID="6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493" Nov 24 11:48:43 crc kubenswrapper[5072]: E1124 11:48:43.017960 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:48:54 crc kubenswrapper[5072]: I1124 11:48:54.017263 5072 scope.go:117] "RemoveContainer" containerID="6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493" Nov 24 11:48:54 crc kubenswrapper[5072]: E1124 11:48:54.018216 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:49:08 crc kubenswrapper[5072]: I1124 11:49:08.017351 5072 scope.go:117] "RemoveContainer" containerID="6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493" Nov 24 11:49:08 crc kubenswrapper[5072]: E1124 11:49:08.018575 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:49:21 crc kubenswrapper[5072]: I1124 11:49:21.016314 5072 scope.go:117] "RemoveContainer" containerID="6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493" Nov 24 11:49:21 crc kubenswrapper[5072]: E1124 11:49:21.017425 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:49:36 crc kubenswrapper[5072]: I1124 11:49:36.016821 5072 scope.go:117] "RemoveContainer" containerID="6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493" Nov 24 11:49:36 crc kubenswrapper[5072]: E1124 11:49:36.017630 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:49:49 crc kubenswrapper[5072]: I1124 11:49:49.021529 5072 scope.go:117] "RemoveContainer" containerID="6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493" Nov 24 11:49:49 crc kubenswrapper[5072]: E1124 11:49:49.022345 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:49:50 crc kubenswrapper[5072]: I1124 11:49:50.798971 5072 generic.go:334] "Generic (PLEG): container finished" podID="60fbd22d-6dd6-4bdf-aa92-3b4682feeee0" containerID="a13f14f34b2715d9842f15a2d3f1645ed3bedc94fa7bd78b8df7afbbc81498b2" exitCode=0 Nov 24 11:49:50 crc kubenswrapper[5072]: I1124 11:49:50.799051 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt" event={"ID":"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0","Type":"ContainerDied","Data":"a13f14f34b2715d9842f15a2d3f1645ed3bedc94fa7bd78b8df7afbbc81498b2"} Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.264083 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.358903 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-inventory\") pod \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\" (UID: \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\") " Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.359146 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-ssh-key\") pod \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\" (UID: \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\") " Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.359203 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-ceph\") pod \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\" (UID: \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\") " Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.359234 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-ovncontroller-config-0\") pod \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\" (UID: \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\") " Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.359347 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-ovn-combined-ca-bundle\") pod \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\" (UID: \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\") " Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.359403 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5hxw\" (UniqueName: \"kubernetes.io/projected/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-kube-api-access-s5hxw\") pod \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\" (UID: \"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0\") " Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.365952 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-ceph" (OuterVolumeSpecName: "ceph") pod "60fbd22d-6dd6-4bdf-aa92-3b4682feeee0" (UID: "60fbd22d-6dd6-4bdf-aa92-3b4682feeee0"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.369615 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-kube-api-access-s5hxw" (OuterVolumeSpecName: "kube-api-access-s5hxw") pod "60fbd22d-6dd6-4bdf-aa92-3b4682feeee0" (UID: "60fbd22d-6dd6-4bdf-aa92-3b4682feeee0"). InnerVolumeSpecName "kube-api-access-s5hxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.371556 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "60fbd22d-6dd6-4bdf-aa92-3b4682feeee0" (UID: "60fbd22d-6dd6-4bdf-aa92-3b4682feeee0"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.391952 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-inventory" (OuterVolumeSpecName: "inventory") pod "60fbd22d-6dd6-4bdf-aa92-3b4682feeee0" (UID: "60fbd22d-6dd6-4bdf-aa92-3b4682feeee0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.400488 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "60fbd22d-6dd6-4bdf-aa92-3b4682feeee0" (UID: "60fbd22d-6dd6-4bdf-aa92-3b4682feeee0"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.403128 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "60fbd22d-6dd6-4bdf-aa92-3b4682feeee0" (UID: "60fbd22d-6dd6-4bdf-aa92-3b4682feeee0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.465680 5072 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.465722 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5hxw\" (UniqueName: \"kubernetes.io/projected/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-kube-api-access-s5hxw\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.465733 5072 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.465742 5072 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.465751 5072 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.465759 5072 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/60fbd22d-6dd6-4bdf-aa92-3b4682feeee0-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.817474 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt" event={"ID":"60fbd22d-6dd6-4bdf-aa92-3b4682feeee0","Type":"ContainerDied","Data":"22ccacea161693b79660da995522f0c56d523755f6c6cfc01ae8a535ae1e9f0c"} Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.817524 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22ccacea161693b79660da995522f0c56d523755f6c6cfc01ae8a535ae1e9f0c" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 
11:49:52.817535 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qk9gt" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.907503 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95"] Nov 24 11:49:52 crc kubenswrapper[5072]: E1124 11:49:52.907852 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d52fce03-19bd-4073-ba53-3acaa9944571" containerName="extract-content" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.907868 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="d52fce03-19bd-4073-ba53-3acaa9944571" containerName="extract-content" Nov 24 11:49:52 crc kubenswrapper[5072]: E1124 11:49:52.907886 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d52fce03-19bd-4073-ba53-3acaa9944571" containerName="extract-utilities" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.907892 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="d52fce03-19bd-4073-ba53-3acaa9944571" containerName="extract-utilities" Nov 24 11:49:52 crc kubenswrapper[5072]: E1124 11:49:52.907908 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d52fce03-19bd-4073-ba53-3acaa9944571" containerName="registry-server" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.907915 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="d52fce03-19bd-4073-ba53-3acaa9944571" containerName="registry-server" Nov 24 11:49:52 crc kubenswrapper[5072]: E1124 11:49:52.907933 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60fbd22d-6dd6-4bdf-aa92-3b4682feeee0" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.907942 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="60fbd22d-6dd6-4bdf-aa92-3b4682feeee0" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.908122 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="d52fce03-19bd-4073-ba53-3acaa9944571" containerName="registry-server" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.908157 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="60fbd22d-6dd6-4bdf-aa92-3b4682feeee0" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.908733 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.912665 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.912747 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.913960 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.914102 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.914128 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b6s7d" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.914056 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.915305 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.917228 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95"] Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.974211 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95\" (UID: \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.974257 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95\" (UID: \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.974316 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lrnr\" (UniqueName: \"kubernetes.io/projected/45051007-ac2c-49b5-acda-c9fdccd8cf9d-kube-api-access-6lrnr\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95\" (UID: \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.974355 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95\" (UID: \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.974398 5072 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95\" (UID: \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.974421 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95\" (UID: \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" Nov 24 11:49:52 crc kubenswrapper[5072]: I1124 11:49:52.974715 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95\" (UID: \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" Nov 24 11:49:53 crc kubenswrapper[5072]: I1124 11:49:53.076342 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95\" (UID: \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" Nov 24 11:49:53 crc kubenswrapper[5072]: I1124 11:49:53.076439 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95\" (UID: \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" Nov 24 11:49:53 crc kubenswrapper[5072]: I1124 11:49:53.076475 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95\" (UID: \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" Nov 24 11:49:53 crc kubenswrapper[5072]: I1124 11:49:53.076541 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lrnr\" (UniqueName: \"kubernetes.io/projected/45051007-ac2c-49b5-acda-c9fdccd8cf9d-kube-api-access-6lrnr\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95\" (UID: \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" Nov 24 11:49:53 crc kubenswrapper[5072]: I1124 11:49:53.076587 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95\" (UID: 
\"45051007-ac2c-49b5-acda-c9fdccd8cf9d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" Nov 24 11:49:53 crc kubenswrapper[5072]: I1124 11:49:53.076627 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95\" (UID: \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" Nov 24 11:49:53 crc kubenswrapper[5072]: I1124 11:49:53.076655 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95\" (UID: \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" Nov 24 11:49:53 crc kubenswrapper[5072]: I1124 11:49:53.080791 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95\" (UID: \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" Nov 24 11:49:53 crc kubenswrapper[5072]: I1124 11:49:53.081042 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95\" (UID: \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" Nov 24 11:49:53 crc kubenswrapper[5072]: I1124 11:49:53.081156 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95\" (UID: \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" Nov 24 11:49:53 crc kubenswrapper[5072]: I1124 11:49:53.082749 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95\" (UID: \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" Nov 24 11:49:53 crc kubenswrapper[5072]: I1124 11:49:53.082759 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95\" (UID: \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" Nov 24 11:49:53 crc kubenswrapper[5072]: I1124 11:49:53.084836 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95\" (UID: \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" Nov 24 11:49:53 crc kubenswrapper[5072]: I1124 11:49:53.094836 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lrnr\" (UniqueName: \"kubernetes.io/projected/45051007-ac2c-49b5-acda-c9fdccd8cf9d-kube-api-access-6lrnr\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95\" (UID: \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" Nov 24 11:49:53 crc kubenswrapper[5072]: I1124 11:49:53.224949 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" Nov 24 11:49:53 crc kubenswrapper[5072]: I1124 11:49:53.774291 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95"] Nov 24 11:49:53 crc kubenswrapper[5072]: I1124 11:49:53.827766 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" event={"ID":"45051007-ac2c-49b5-acda-c9fdccd8cf9d","Type":"ContainerStarted","Data":"ace4ba101bc252d603813cc02cd9a914c4f8e9ec96504b1fb52d039f800365eb"} Nov 24 11:49:54 crc kubenswrapper[5072]: I1124 11:49:54.837007 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" event={"ID":"45051007-ac2c-49b5-acda-c9fdccd8cf9d","Type":"ContainerStarted","Data":"a4bed5f04e4f439bc2b123bc0bc44572817858714a526105e11cd67c9b850bb3"} Nov 24 11:49:54 crc kubenswrapper[5072]: I1124 11:49:54.862659 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" podStartSLOduration=2.479488175 podStartE2EDuration="2.862637883s" podCreationTimestamp="2025-11-24 11:49:52 +0000 UTC" firstStartedPulling="2025-11-24 11:49:53.788050268 +0000 UTC m=+2445.499574744" lastFinishedPulling="2025-11-24 11:49:54.171199966 +0000 UTC m=+2445.882724452" observedRunningTime="2025-11-24 11:49:54.857979639 +0000 UTC m=+2446.569504125" watchObservedRunningTime="2025-11-24 11:49:54.862637883 +0000 UTC m=+2446.574162359" Nov 24 11:50:03 crc kubenswrapper[5072]: I1124 11:50:03.018670 5072 scope.go:117] "RemoveContainer" containerID="6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493" Nov 24 11:50:03 crc kubenswrapper[5072]: E1124 11:50:03.019895 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:50:15 crc kubenswrapper[5072]: I1124 11:50:15.017477 5072 scope.go:117] "RemoveContainer" containerID="6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493" Nov 24 11:50:15 crc kubenswrapper[5072]: E1124 11:50:15.018155 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:50:28 crc kubenswrapper[5072]: I1124 11:50:28.017359 5072 scope.go:117] "RemoveContainer" containerID="6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493" Nov 24 11:50:28 crc kubenswrapper[5072]: E1124 11:50:28.019843 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:50:42 crc kubenswrapper[5072]: I1124 11:50:42.017786 5072 scope.go:117] "RemoveContainer" containerID="6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493" Nov 24 11:50:42 crc kubenswrapper[5072]: E1124 11:50:42.019010 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:50:55 crc kubenswrapper[5072]: I1124 11:50:55.580236 5072 generic.go:334] "Generic (PLEG): container finished" podID="45051007-ac2c-49b5-acda-c9fdccd8cf9d" containerID="a4bed5f04e4f439bc2b123bc0bc44572817858714a526105e11cd67c9b850bb3" exitCode=0 Nov 24 11:50:55 crc kubenswrapper[5072]: I1124 11:50:55.580387 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" event={"ID":"45051007-ac2c-49b5-acda-c9fdccd8cf9d","Type":"ContainerDied","Data":"a4bed5f04e4f439bc2b123bc0bc44572817858714a526105e11cd67c9b850bb3"} Nov 24 11:50:56 crc kubenswrapper[5072]: I1124 11:50:56.016485 5072 scope.go:117] "RemoveContainer" containerID="6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493" Nov 24 11:50:56 crc kubenswrapper[5072]: E1124 11:50:56.016800 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.008389 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.109660 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\" (UID: \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\") " Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.109761 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-ceph\") pod \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\" (UID: \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\") " Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.109796 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-ssh-key\") pod \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\" (UID: \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\") " Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.109853 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-inventory\") pod \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\" (UID: \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\") " Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.110007 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-neutron-metadata-combined-ca-bundle\") pod \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\" (UID: \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\") " Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.110050 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-nova-metadata-neutron-config-0\") pod \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\" (UID: \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\") " Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.110136 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lrnr\" (UniqueName: \"kubernetes.io/projected/45051007-ac2c-49b5-acda-c9fdccd8cf9d-kube-api-access-6lrnr\") pod \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\" (UID: \"45051007-ac2c-49b5-acda-c9fdccd8cf9d\") " Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.115591 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45051007-ac2c-49b5-acda-c9fdccd8cf9d-kube-api-access-6lrnr" (OuterVolumeSpecName: "kube-api-access-6lrnr") pod "45051007-ac2c-49b5-acda-c9fdccd8cf9d" (UID: "45051007-ac2c-49b5-acda-c9fdccd8cf9d"). InnerVolumeSpecName "kube-api-access-6lrnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.115658 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-ceph" (OuterVolumeSpecName: "ceph") pod "45051007-ac2c-49b5-acda-c9fdccd8cf9d" (UID: "45051007-ac2c-49b5-acda-c9fdccd8cf9d"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.123122 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "45051007-ac2c-49b5-acda-c9fdccd8cf9d" (UID: "45051007-ac2c-49b5-acda-c9fdccd8cf9d"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.135537 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-inventory" (OuterVolumeSpecName: "inventory") pod "45051007-ac2c-49b5-acda-c9fdccd8cf9d" (UID: "45051007-ac2c-49b5-acda-c9fdccd8cf9d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.137027 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "45051007-ac2c-49b5-acda-c9fdccd8cf9d" (UID: "45051007-ac2c-49b5-acda-c9fdccd8cf9d"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.138752 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "45051007-ac2c-49b5-acda-c9fdccd8cf9d" (UID: "45051007-ac2c-49b5-acda-c9fdccd8cf9d"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.144123 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "45051007-ac2c-49b5-acda-c9fdccd8cf9d" (UID: "45051007-ac2c-49b5-acda-c9fdccd8cf9d"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.212058 5072 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.212110 5072 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.212125 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lrnr\" (UniqueName: \"kubernetes.io/projected/45051007-ac2c-49b5-acda-c9fdccd8cf9d-kube-api-access-6lrnr\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.212139 5072 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.212153 5072 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.212165 5072 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.212180 5072 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45051007-ac2c-49b5-acda-c9fdccd8cf9d-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.604044 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" event={"ID":"45051007-ac2c-49b5-acda-c9fdccd8cf9d","Type":"ContainerDied","Data":"ace4ba101bc252d603813cc02cd9a914c4f8e9ec96504b1fb52d039f800365eb"} Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.604093 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ace4ba101bc252d603813cc02cd9a914c4f8e9ec96504b1fb52d039f800365eb" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.604095 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.750087 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq"] Nov 24 11:50:57 crc kubenswrapper[5072]: E1124 11:50:57.752975 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45051007-ac2c-49b5-acda-c9fdccd8cf9d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.753009 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="45051007-ac2c-49b5-acda-c9fdccd8cf9d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.753242 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="45051007-ac2c-49b5-acda-c9fdccd8cf9d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.753898 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.758741 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.758800 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.758762 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.759236 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.759611 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.759950 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b6s7d" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.780279 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq"] Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.823304 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq\" (UID: \"619cab13-44ee-48c6-bf40-4baddd9ad88e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.823498 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq\" (UID: \"619cab13-44ee-48c6-bf40-4baddd9ad88e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.823544 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq\" (UID: \"619cab13-44ee-48c6-bf40-4baddd9ad88e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.823606 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj858\" (UniqueName: \"kubernetes.io/projected/619cab13-44ee-48c6-bf40-4baddd9ad88e-kube-api-access-kj858\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq\" (UID: \"619cab13-44ee-48c6-bf40-4baddd9ad88e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.823812 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq\" (UID: \"619cab13-44ee-48c6-bf40-4baddd9ad88e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.823898 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq\" (UID: \"619cab13-44ee-48c6-bf40-4baddd9ad88e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.925457 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj858\" (UniqueName: \"kubernetes.io/projected/619cab13-44ee-48c6-bf40-4baddd9ad88e-kube-api-access-kj858\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq\" (UID: \"619cab13-44ee-48c6-bf40-4baddd9ad88e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.925903 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq\" (UID: \"619cab13-44ee-48c6-bf40-4baddd9ad88e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.926083 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq\" (UID: \"619cab13-44ee-48c6-bf40-4baddd9ad88e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.926278 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq\" (UID: \"619cab13-44ee-48c6-bf40-4baddd9ad88e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.926691 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-libvirt-secret-0\") 
pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq\" (UID: \"619cab13-44ee-48c6-bf40-4baddd9ad88e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.926986 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq\" (UID: \"619cab13-44ee-48c6-bf40-4baddd9ad88e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.931094 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq\" (UID: \"619cab13-44ee-48c6-bf40-4baddd9ad88e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.931768 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq\" (UID: \"619cab13-44ee-48c6-bf40-4baddd9ad88e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.932108 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq\" (UID: \"619cab13-44ee-48c6-bf40-4baddd9ad88e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.932173 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq\" (UID: \"619cab13-44ee-48c6-bf40-4baddd9ad88e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.932292 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq\" (UID: \"619cab13-44ee-48c6-bf40-4baddd9ad88e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq" Nov 24 11:50:57 crc kubenswrapper[5072]: I1124 11:50:57.946725 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj858\" (UniqueName: \"kubernetes.io/projected/619cab13-44ee-48c6-bf40-4baddd9ad88e-kube-api-access-kj858\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq\" (UID: \"619cab13-44ee-48c6-bf40-4baddd9ad88e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq" Nov 24 11:50:58 crc kubenswrapper[5072]: I1124 11:50:58.080186 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq" Nov 24 11:50:58 crc kubenswrapper[5072]: I1124 11:50:58.585940 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq"] Nov 24 11:50:58 crc kubenswrapper[5072]: I1124 11:50:58.610930 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq" event={"ID":"619cab13-44ee-48c6-bf40-4baddd9ad88e","Type":"ContainerStarted","Data":"84096f54463fcf256dd47f63d58e08b7dc8b48e43bae9f497afa4eb4bcd6901f"} Nov 24 11:51:00 crc kubenswrapper[5072]: I1124 11:51:00.628751 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq" event={"ID":"619cab13-44ee-48c6-bf40-4baddd9ad88e","Type":"ContainerStarted","Data":"62867d302e0084cb5e3a52f7fa2e8f52babcf1175c612a7728a55b928eae693a"} Nov 24 11:51:00 crc kubenswrapper[5072]: I1124 11:51:00.657782 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq" podStartSLOduration=2.802398333 podStartE2EDuration="3.657754137s" podCreationTimestamp="2025-11-24 11:50:57 +0000 UTC" firstStartedPulling="2025-11-24 11:50:58.581252745 +0000 UTC m=+2510.292777221" lastFinishedPulling="2025-11-24 11:50:59.436608549 +0000 UTC m=+2511.148133025" observedRunningTime="2025-11-24 11:51:00.650272824 +0000 UTC m=+2512.361797310" watchObservedRunningTime="2025-11-24 11:51:00.657754137 +0000 UTC m=+2512.369278633" Nov 24 11:51:09 crc kubenswrapper[5072]: I1124 11:51:09.028615 5072 scope.go:117] "RemoveContainer" containerID="6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493" Nov 24 11:51:09 crc kubenswrapper[5072]: E1124 11:51:09.029897 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:51:20 crc kubenswrapper[5072]: I1124 11:51:20.016892 5072 scope.go:117] "RemoveContainer" containerID="6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493" Nov 24 11:51:20 crc kubenswrapper[5072]: E1124 11:51:20.017565 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:51:33 crc kubenswrapper[5072]: I1124 11:51:33.016454 5072 scope.go:117] "RemoveContainer" containerID="6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493" Nov 24 11:51:33 crc kubenswrapper[5072]: E1124 11:51:33.017302 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:51:44 crc kubenswrapper[5072]: I1124 11:51:44.016761 5072 scope.go:117] "RemoveContainer" containerID="6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493" Nov 24 11:51:45 crc kubenswrapper[5072]: I1124 11:51:45.108255 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerStarted","Data":"214cd3fb3c364f4c0eb062815b36644ab6af47ce8000f33d400642a27a4dd0ec"} Nov 24 11:53:49 crc kubenswrapper[5072]: I1124 11:53:49.469766 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d6gm2"] Nov 24 11:53:49 crc kubenswrapper[5072]: I1124 11:53:49.472580 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6gm2" Nov 24 11:53:49 crc kubenswrapper[5072]: I1124 11:53:49.484851 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6gm2"] Nov 24 11:53:49 crc kubenswrapper[5072]: I1124 11:53:49.518008 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6620e1f4-f4f9-47c5-8419-25052e232a8e-utilities\") pod \"redhat-marketplace-d6gm2\" (UID: \"6620e1f4-f4f9-47c5-8419-25052e232a8e\") " pod="openshift-marketplace/redhat-marketplace-d6gm2" Nov 24 11:53:49 crc kubenswrapper[5072]: I1124 11:53:49.518113 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6620e1f4-f4f9-47c5-8419-25052e232a8e-catalog-content\") pod \"redhat-marketplace-d6gm2\" (UID: \"6620e1f4-f4f9-47c5-8419-25052e232a8e\") " pod="openshift-marketplace/redhat-marketplace-d6gm2" Nov 24 11:53:49 crc kubenswrapper[5072]: I1124 11:53:49.518140 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sk8t\" (UniqueName: \"kubernetes.io/projected/6620e1f4-f4f9-47c5-8419-25052e232a8e-kube-api-access-2sk8t\") pod \"redhat-marketplace-d6gm2\" (UID: \"6620e1f4-f4f9-47c5-8419-25052e232a8e\") " pod="openshift-marketplace/redhat-marketplace-d6gm2" Nov 24 11:53:49 crc kubenswrapper[5072]: I1124 11:53:49.619563 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6620e1f4-f4f9-47c5-8419-25052e232a8e-utilities\") pod \"redhat-marketplace-d6gm2\" (UID: \"6620e1f4-f4f9-47c5-8419-25052e232a8e\") " pod="openshift-marketplace/redhat-marketplace-d6gm2" Nov 24 11:53:49 crc kubenswrapper[5072]: I1124 11:53:49.619663 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6620e1f4-f4f9-47c5-8419-25052e232a8e-catalog-content\") pod \"redhat-marketplace-d6gm2\" (UID: \"6620e1f4-f4f9-47c5-8419-25052e232a8e\") " pod="openshift-marketplace/redhat-marketplace-d6gm2" Nov 24 11:53:49 crc kubenswrapper[5072]: I1124 11:53:49.619690 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sk8t\" (UniqueName: \"kubernetes.io/projected/6620e1f4-f4f9-47c5-8419-25052e232a8e-kube-api-access-2sk8t\") pod \"redhat-marketplace-d6gm2\" (UID: \"6620e1f4-f4f9-47c5-8419-25052e232a8e\") " 
pod="openshift-marketplace/redhat-marketplace-d6gm2" Nov 24 11:53:49 crc kubenswrapper[5072]: I1124 11:53:49.620120 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6620e1f4-f4f9-47c5-8419-25052e232a8e-utilities\") pod \"redhat-marketplace-d6gm2\" (UID: \"6620e1f4-f4f9-47c5-8419-25052e232a8e\") " pod="openshift-marketplace/redhat-marketplace-d6gm2" Nov 24 11:53:49 crc kubenswrapper[5072]: I1124 11:53:49.620289 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6620e1f4-f4f9-47c5-8419-25052e232a8e-catalog-content\") pod \"redhat-marketplace-d6gm2\" (UID: \"6620e1f4-f4f9-47c5-8419-25052e232a8e\") " pod="openshift-marketplace/redhat-marketplace-d6gm2" Nov 24 11:53:49 crc kubenswrapper[5072]: I1124 11:53:49.643064 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sk8t\" (UniqueName: \"kubernetes.io/projected/6620e1f4-f4f9-47c5-8419-25052e232a8e-kube-api-access-2sk8t\") pod \"redhat-marketplace-d6gm2\" (UID: \"6620e1f4-f4f9-47c5-8419-25052e232a8e\") " pod="openshift-marketplace/redhat-marketplace-d6gm2" Nov 24 11:53:49 crc kubenswrapper[5072]: I1124 11:53:49.794969 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6gm2" Nov 24 11:53:50 crc kubenswrapper[5072]: I1124 11:53:50.244163 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6gm2"] Nov 24 11:53:50 crc kubenswrapper[5072]: I1124 11:53:50.355522 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6gm2" event={"ID":"6620e1f4-f4f9-47c5-8419-25052e232a8e","Type":"ContainerStarted","Data":"db1b4a9bc69335f26fb054e5a2469b5147d633b5fac7a2160acd8472f6e78202"} Nov 24 11:53:51 crc kubenswrapper[5072]: I1124 11:53:51.364538 5072 generic.go:334] "Generic (PLEG): container finished" podID="6620e1f4-f4f9-47c5-8419-25052e232a8e" containerID="2c5690ee53e7208712c250f43e8972c86dadf3e54de1d519a64192ee67c4b89f" exitCode=0 Nov 24 11:53:51 crc kubenswrapper[5072]: I1124 11:53:51.364595 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6gm2" event={"ID":"6620e1f4-f4f9-47c5-8419-25052e232a8e","Type":"ContainerDied","Data":"2c5690ee53e7208712c250f43e8972c86dadf3e54de1d519a64192ee67c4b89f"} Nov 24 11:53:51 crc kubenswrapper[5072]: I1124 11:53:51.366718 5072 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 11:53:53 crc kubenswrapper[5072]: I1124 11:53:53.399673 5072 generic.go:334] "Generic (PLEG): container finished" podID="6620e1f4-f4f9-47c5-8419-25052e232a8e" containerID="b21ee9ff26350ec9e422ce32ad81832dc7b1c95697af33c5984ae414daeb173e" exitCode=0 Nov 24 11:53:53 crc kubenswrapper[5072]: I1124 11:53:53.399807 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6gm2" event={"ID":"6620e1f4-f4f9-47c5-8419-25052e232a8e","Type":"ContainerDied","Data":"b21ee9ff26350ec9e422ce32ad81832dc7b1c95697af33c5984ae414daeb173e"} Nov 24 11:53:54 crc kubenswrapper[5072]: I1124 11:53:54.416588 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6gm2" event={"ID":"6620e1f4-f4f9-47c5-8419-25052e232a8e","Type":"ContainerStarted","Data":"0a6d4f087600dbe547466556674c7c762b78ddc02c0695e338bb41a0631cea87"} Nov 24 
11:53:54 crc kubenswrapper[5072]: I1124 11:53:54.447803 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d6gm2" podStartSLOduration=2.969790124 podStartE2EDuration="5.447776546s" podCreationTimestamp="2025-11-24 11:53:49 +0000 UTC" firstStartedPulling="2025-11-24 11:53:51.366448814 +0000 UTC m=+2683.077973290" lastFinishedPulling="2025-11-24 11:53:53.844435196 +0000 UTC m=+2685.555959712" observedRunningTime="2025-11-24 11:53:54.4330158 +0000 UTC m=+2686.144540286" watchObservedRunningTime="2025-11-24 11:53:54.447776546 +0000 UTC m=+2686.159301032" Nov 24 11:53:59 crc kubenswrapper[5072]: I1124 11:53:59.796325 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d6gm2" Nov 24 11:53:59 crc kubenswrapper[5072]: I1124 11:53:59.797125 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d6gm2" Nov 24 11:53:59 crc kubenswrapper[5072]: I1124 11:53:59.868623 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d6gm2" Nov 24 11:54:00 crc kubenswrapper[5072]: I1124 11:54:00.535453 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d6gm2" Nov 24 11:54:00 crc kubenswrapper[5072]: I1124 11:54:00.592445 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6gm2"] Nov 24 11:54:02 crc kubenswrapper[5072]: I1124 11:54:02.510455 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d6gm2" podUID="6620e1f4-f4f9-47c5-8419-25052e232a8e" containerName="registry-server" containerID="cri-o://0a6d4f087600dbe547466556674c7c762b78ddc02c0695e338bb41a0631cea87" gracePeriod=2 Nov 24 11:54:03 crc kubenswrapper[5072]: I1124 11:54:03.021114 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6gm2" Nov 24 11:54:03 crc kubenswrapper[5072]: I1124 11:54:03.070463 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sk8t\" (UniqueName: \"kubernetes.io/projected/6620e1f4-f4f9-47c5-8419-25052e232a8e-kube-api-access-2sk8t\") pod \"6620e1f4-f4f9-47c5-8419-25052e232a8e\" (UID: \"6620e1f4-f4f9-47c5-8419-25052e232a8e\") " Nov 24 11:54:03 crc kubenswrapper[5072]: I1124 11:54:03.070515 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6620e1f4-f4f9-47c5-8419-25052e232a8e-utilities\") pod \"6620e1f4-f4f9-47c5-8419-25052e232a8e\" (UID: \"6620e1f4-f4f9-47c5-8419-25052e232a8e\") " Nov 24 11:54:03 crc kubenswrapper[5072]: I1124 11:54:03.070578 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6620e1f4-f4f9-47c5-8419-25052e232a8e-catalog-content\") pod \"6620e1f4-f4f9-47c5-8419-25052e232a8e\" (UID: \"6620e1f4-f4f9-47c5-8419-25052e232a8e\") " Nov 24 11:54:03 crc kubenswrapper[5072]: I1124 11:54:03.071588 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6620e1f4-f4f9-47c5-8419-25052e232a8e-utilities" (OuterVolumeSpecName: "utilities") pod "6620e1f4-f4f9-47c5-8419-25052e232a8e" (UID: "6620e1f4-f4f9-47c5-8419-25052e232a8e"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:54:03 crc kubenswrapper[5072]: I1124 11:54:03.080593 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6620e1f4-f4f9-47c5-8419-25052e232a8e-kube-api-access-2sk8t" (OuterVolumeSpecName: "kube-api-access-2sk8t") pod "6620e1f4-f4f9-47c5-8419-25052e232a8e" (UID: "6620e1f4-f4f9-47c5-8419-25052e232a8e"). InnerVolumeSpecName "kube-api-access-2sk8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:54:03 crc kubenswrapper[5072]: I1124 11:54:03.174794 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sk8t\" (UniqueName: \"kubernetes.io/projected/6620e1f4-f4f9-47c5-8419-25052e232a8e-kube-api-access-2sk8t\") on node \"crc\" DevicePath \"\"" Nov 24 11:54:03 crc kubenswrapper[5072]: I1124 11:54:03.175398 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6620e1f4-f4f9-47c5-8419-25052e232a8e-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:54:03 crc kubenswrapper[5072]: I1124 11:54:03.194793 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6620e1f4-f4f9-47c5-8419-25052e232a8e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6620e1f4-f4f9-47c5-8419-25052e232a8e" (UID: "6620e1f4-f4f9-47c5-8419-25052e232a8e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:54:03 crc kubenswrapper[5072]: I1124 11:54:03.276977 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6620e1f4-f4f9-47c5-8419-25052e232a8e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:54:03 crc kubenswrapper[5072]: I1124 11:54:03.521570 5072 generic.go:334] "Generic (PLEG): container finished" podID="6620e1f4-f4f9-47c5-8419-25052e232a8e" containerID="0a6d4f087600dbe547466556674c7c762b78ddc02c0695e338bb41a0631cea87" exitCode=0 Nov 24 11:54:03 crc kubenswrapper[5072]: I1124 11:54:03.522882 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6gm2" event={"ID":"6620e1f4-f4f9-47c5-8419-25052e232a8e","Type":"ContainerDied","Data":"0a6d4f087600dbe547466556674c7c762b78ddc02c0695e338bb41a0631cea87"} Nov 24 11:54:03 crc kubenswrapper[5072]: I1124 11:54:03.523089 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6gm2" event={"ID":"6620e1f4-f4f9-47c5-8419-25052e232a8e","Type":"ContainerDied","Data":"db1b4a9bc69335f26fb054e5a2469b5147d633b5fac7a2160acd8472f6e78202"} Nov 24 11:54:03 crc kubenswrapper[5072]: I1124 11:54:03.523216 5072 scope.go:117] "RemoveContainer" containerID="0a6d4f087600dbe547466556674c7c762b78ddc02c0695e338bb41a0631cea87" Nov 24 11:54:03 crc kubenswrapper[5072]: I1124 11:54:03.522928 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6gm2" Nov 24 11:54:03 crc kubenswrapper[5072]: I1124 11:54:03.548316 5072 scope.go:117] "RemoveContainer" containerID="b21ee9ff26350ec9e422ce32ad81832dc7b1c95697af33c5984ae414daeb173e" Nov 24 11:54:03 crc kubenswrapper[5072]: I1124 11:54:03.574153 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6gm2"] Nov 24 11:54:03 crc kubenswrapper[5072]: I1124 11:54:03.580486 5072 scope.go:117] "RemoveContainer" containerID="2c5690ee53e7208712c250f43e8972c86dadf3e54de1d519a64192ee67c4b89f" Nov 24 11:54:03 crc kubenswrapper[5072]: I1124 11:54:03.586475 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6gm2"] Nov 24 11:54:03 crc kubenswrapper[5072]: I1124 11:54:03.638987 5072 scope.go:117] "RemoveContainer" containerID="0a6d4f087600dbe547466556674c7c762b78ddc02c0695e338bb41a0631cea87" Nov 24 11:54:03 crc kubenswrapper[5072]: E1124 11:54:03.639812 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a6d4f087600dbe547466556674c7c762b78ddc02c0695e338bb41a0631cea87\": container with ID starting with 0a6d4f087600dbe547466556674c7c762b78ddc02c0695e338bb41a0631cea87 not found: ID does not exist" containerID="0a6d4f087600dbe547466556674c7c762b78ddc02c0695e338bb41a0631cea87" Nov 24 11:54:03 crc kubenswrapper[5072]: I1124 11:54:03.639849 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a6d4f087600dbe547466556674c7c762b78ddc02c0695e338bb41a0631cea87"} err="failed to get container status \"0a6d4f087600dbe547466556674c7c762b78ddc02c0695e338bb41a0631cea87\": rpc error: code = NotFound desc = could not find container \"0a6d4f087600dbe547466556674c7c762b78ddc02c0695e338bb41a0631cea87\": container with ID starting with 0a6d4f087600dbe547466556674c7c762b78ddc02c0695e338bb41a0631cea87 not found: ID does not exist" Nov 24 11:54:03 crc kubenswrapper[5072]: I1124 11:54:03.639875 5072 scope.go:117] "RemoveContainer" containerID="b21ee9ff26350ec9e422ce32ad81832dc7b1c95697af33c5984ae414daeb173e" Nov 24 11:54:03 crc kubenswrapper[5072]: E1124 11:54:03.640207 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b21ee9ff26350ec9e422ce32ad81832dc7b1c95697af33c5984ae414daeb173e\": container with ID starting with b21ee9ff26350ec9e422ce32ad81832dc7b1c95697af33c5984ae414daeb173e not found: ID does not exist" containerID="b21ee9ff26350ec9e422ce32ad81832dc7b1c95697af33c5984ae414daeb173e" Nov 24 11:54:03 crc kubenswrapper[5072]: I1124 11:54:03.640235 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b21ee9ff26350ec9e422ce32ad81832dc7b1c95697af33c5984ae414daeb173e"} err="failed to get container status \"b21ee9ff26350ec9e422ce32ad81832dc7b1c95697af33c5984ae414daeb173e\": rpc error: code = NotFound desc = could not find container \"b21ee9ff26350ec9e422ce32ad81832dc7b1c95697af33c5984ae414daeb173e\": container with ID starting with b21ee9ff26350ec9e422ce32ad81832dc7b1c95697af33c5984ae414daeb173e not found: ID does not exist" Nov 24 11:54:03 crc kubenswrapper[5072]: I1124 11:54:03.640247 5072 scope.go:117] "RemoveContainer" containerID="2c5690ee53e7208712c250f43e8972c86dadf3e54de1d519a64192ee67c4b89f" Nov 24 11:54:03 crc kubenswrapper[5072]: E1124 11:54:03.640518 5072 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2c5690ee53e7208712c250f43e8972c86dadf3e54de1d519a64192ee67c4b89f\": container with ID starting with 2c5690ee53e7208712c250f43e8972c86dadf3e54de1d519a64192ee67c4b89f not found: ID does not exist" containerID="2c5690ee53e7208712c250f43e8972c86dadf3e54de1d519a64192ee67c4b89f" Nov 24 11:54:03 crc kubenswrapper[5072]: I1124 11:54:03.640542 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c5690ee53e7208712c250f43e8972c86dadf3e54de1d519a64192ee67c4b89f"} err="failed to get container status \"2c5690ee53e7208712c250f43e8972c86dadf3e54de1d519a64192ee67c4b89f\": rpc error: code = NotFound desc = could not find container \"2c5690ee53e7208712c250f43e8972c86dadf3e54de1d519a64192ee67c4b89f\": container with ID starting with 2c5690ee53e7208712c250f43e8972c86dadf3e54de1d519a64192ee67c4b89f not found: ID does not exist" Nov 24 11:54:05 crc kubenswrapper[5072]: I1124 11:54:05.037491 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6620e1f4-f4f9-47c5-8419-25052e232a8e" path="/var/lib/kubelet/pods/6620e1f4-f4f9-47c5-8419-25052e232a8e/volumes" Nov 24 11:54:13 crc kubenswrapper[5072]: I1124 11:54:13.645463 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:54:13 crc kubenswrapper[5072]: I1124 11:54:13.645934 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:54:43 crc kubenswrapper[5072]: I1124 11:54:43.644902 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:54:43 crc kubenswrapper[5072]: I1124 11:54:43.645834 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:55:13 crc kubenswrapper[5072]: I1124 11:55:13.645581 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:55:13 crc kubenswrapper[5072]: I1124 11:55:13.646553 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:55:13 crc kubenswrapper[5072]: I1124 11:55:13.646632 5072 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 11:55:13 crc kubenswrapper[5072]: I1124 11:55:13.647862 5072 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"214cd3fb3c364f4c0eb062815b36644ab6af47ce8000f33d400642a27a4dd0ec"} pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 11:55:13 crc kubenswrapper[5072]: I1124 11:55:13.647990 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" containerID="cri-o://214cd3fb3c364f4c0eb062815b36644ab6af47ce8000f33d400642a27a4dd0ec" gracePeriod=600 Nov 24 11:55:14 crc kubenswrapper[5072]: I1124 11:55:14.205237 5072 generic.go:334] "Generic (PLEG): container finished" podID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerID="214cd3fb3c364f4c0eb062815b36644ab6af47ce8000f33d400642a27a4dd0ec" exitCode=0 Nov 24 11:55:14 crc kubenswrapper[5072]: I1124 11:55:14.205399 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerDied","Data":"214cd3fb3c364f4c0eb062815b36644ab6af47ce8000f33d400642a27a4dd0ec"} Nov 24 11:55:14 crc kubenswrapper[5072]: I1124 11:55:14.205618 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerStarted","Data":"4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f"} Nov 24 11:55:14 crc kubenswrapper[5072]: I1124 11:55:14.205639 5072 scope.go:117] "RemoveContainer" containerID="6821956e4cab86ef1bb97ee072ae286fa9afb6be72f793a93d8280a527b7f493" Nov 24 11:55:24 crc kubenswrapper[5072]: I1124 11:55:24.293352 5072 generic.go:334] "Generic (PLEG): container finished" podID="619cab13-44ee-48c6-bf40-4baddd9ad88e" containerID="62867d302e0084cb5e3a52f7fa2e8f52babcf1175c612a7728a55b928eae693a" exitCode=0 Nov 24 11:55:24 crc kubenswrapper[5072]: I1124 11:55:24.293945 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq" event={"ID":"619cab13-44ee-48c6-bf40-4baddd9ad88e","Type":"ContainerDied","Data":"62867d302e0084cb5e3a52f7fa2e8f52babcf1175c612a7728a55b928eae693a"} Nov 24 11:55:25 crc kubenswrapper[5072]: I1124 11:55:25.788205 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq" Nov 24 11:55:25 crc kubenswrapper[5072]: I1124 11:55:25.848068 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-libvirt-secret-0\") pod \"619cab13-44ee-48c6-bf40-4baddd9ad88e\" (UID: \"619cab13-44ee-48c6-bf40-4baddd9ad88e\") " Nov 24 11:55:25 crc kubenswrapper[5072]: I1124 11:55:25.848188 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-ssh-key\") pod \"619cab13-44ee-48c6-bf40-4baddd9ad88e\" (UID: \"619cab13-44ee-48c6-bf40-4baddd9ad88e\") " Nov 24 11:55:25 crc kubenswrapper[5072]: I1124 11:55:25.848215 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-ceph\") pod \"619cab13-44ee-48c6-bf40-4baddd9ad88e\" (UID: \"619cab13-44ee-48c6-bf40-4baddd9ad88e\") " Nov 24 11:55:25 crc kubenswrapper[5072]: I1124 11:55:25.848280 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-inventory\") pod \"619cab13-44ee-48c6-bf40-4baddd9ad88e\" (UID: \"619cab13-44ee-48c6-bf40-4baddd9ad88e\") " Nov 24 11:55:25 crc kubenswrapper[5072]: I1124 11:55:25.848342 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-libvirt-combined-ca-bundle\") pod \"619cab13-44ee-48c6-bf40-4baddd9ad88e\" (UID: \"619cab13-44ee-48c6-bf40-4baddd9ad88e\") " Nov 24 11:55:25 crc kubenswrapper[5072]: I1124 11:55:25.848387 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj858\" (UniqueName: \"kubernetes.io/projected/619cab13-44ee-48c6-bf40-4baddd9ad88e-kube-api-access-kj858\") pod \"619cab13-44ee-48c6-bf40-4baddd9ad88e\" (UID: \"619cab13-44ee-48c6-bf40-4baddd9ad88e\") " Nov 24 11:55:25 crc kubenswrapper[5072]: I1124 11:55:25.857198 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "619cab13-44ee-48c6-bf40-4baddd9ad88e" (UID: "619cab13-44ee-48c6-bf40-4baddd9ad88e"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:55:25 crc kubenswrapper[5072]: I1124 11:55:25.857246 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-ceph" (OuterVolumeSpecName: "ceph") pod "619cab13-44ee-48c6-bf40-4baddd9ad88e" (UID: "619cab13-44ee-48c6-bf40-4baddd9ad88e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:55:25 crc kubenswrapper[5072]: I1124 11:55:25.857324 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/619cab13-44ee-48c6-bf40-4baddd9ad88e-kube-api-access-kj858" (OuterVolumeSpecName: "kube-api-access-kj858") pod "619cab13-44ee-48c6-bf40-4baddd9ad88e" (UID: "619cab13-44ee-48c6-bf40-4baddd9ad88e"). InnerVolumeSpecName "kube-api-access-kj858". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:55:25 crc kubenswrapper[5072]: I1124 11:55:25.879728 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "619cab13-44ee-48c6-bf40-4baddd9ad88e" (UID: "619cab13-44ee-48c6-bf40-4baddd9ad88e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:55:25 crc kubenswrapper[5072]: I1124 11:55:25.891627 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-inventory" (OuterVolumeSpecName: "inventory") pod "619cab13-44ee-48c6-bf40-4baddd9ad88e" (UID: "619cab13-44ee-48c6-bf40-4baddd9ad88e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:55:25 crc kubenswrapper[5072]: I1124 11:55:25.900116 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "619cab13-44ee-48c6-bf40-4baddd9ad88e" (UID: "619cab13-44ee-48c6-bf40-4baddd9ad88e"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:55:25 crc kubenswrapper[5072]: I1124 11:55:25.953511 5072 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:55:25 crc kubenswrapper[5072]: I1124 11:55:25.953839 5072 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:55:25 crc kubenswrapper[5072]: I1124 11:55:25.954125 5072 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 11:55:25 crc kubenswrapper[5072]: I1124 11:55:25.954477 5072 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:55:25 crc kubenswrapper[5072]: I1124 11:55:25.954688 5072 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619cab13-44ee-48c6-bf40-4baddd9ad88e-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:55:25 crc kubenswrapper[5072]: I1124 11:55:25.954832 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj858\" (UniqueName: \"kubernetes.io/projected/619cab13-44ee-48c6-bf40-4baddd9ad88e-kube-api-access-kj858\") on node \"crc\" DevicePath \"\"" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.316669 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq" event={"ID":"619cab13-44ee-48c6-bf40-4baddd9ad88e","Type":"ContainerDied","Data":"84096f54463fcf256dd47f63d58e08b7dc8b48e43bae9f497afa4eb4bcd6901f"} Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.316740 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84096f54463fcf256dd47f63d58e08b7dc8b48e43bae9f497afa4eb4bcd6901f" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.316832 5072 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.449814 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7"] Nov 24 11:55:26 crc kubenswrapper[5072]: E1124 11:55:26.453616 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6620e1f4-f4f9-47c5-8419-25052e232a8e" containerName="extract-content" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.453646 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="6620e1f4-f4f9-47c5-8419-25052e232a8e" containerName="extract-content" Nov 24 11:55:26 crc kubenswrapper[5072]: E1124 11:55:26.453670 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619cab13-44ee-48c6-bf40-4baddd9ad88e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.453679 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="619cab13-44ee-48c6-bf40-4baddd9ad88e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 11:55:26 crc kubenswrapper[5072]: E1124 11:55:26.453693 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6620e1f4-f4f9-47c5-8419-25052e232a8e" containerName="extract-utilities" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.453702 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="6620e1f4-f4f9-47c5-8419-25052e232a8e" containerName="extract-utilities" Nov 24 11:55:26 crc kubenswrapper[5072]: E1124 11:55:26.453723 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6620e1f4-f4f9-47c5-8419-25052e232a8e" containerName="registry-server" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.453731 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="6620e1f4-f4f9-47c5-8419-25052e232a8e" containerName="registry-server" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.453942 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="619cab13-44ee-48c6-bf40-4baddd9ad88e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.453962 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="6620e1f4-f4f9-47c5-8419-25052e232a8e" containerName="registry-server" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.454707 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.458107 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.458412 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.458631 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.459309 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.459364 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b6s7d" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.459577 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.459655 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.459706 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.464328 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.465168 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7"] Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.574031 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.574083 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/a25d738b-a5be-44f2-86f2-9b554c3f7947-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.574112 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.574156 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-extra-config-0\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.574196 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.574413 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.574461 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.574499 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hnbc\" (UniqueName: \"kubernetes.io/projected/a25d738b-a5be-44f2-86f2-9b554c3f7947-kube-api-access-9hnbc\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.574563 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.574707 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.574759 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 
11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.676856 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.676897 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.676987 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.677015 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/a25d738b-a5be-44f2-86f2-9b554c3f7947-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.677034 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.677063 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.677083 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.677120 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: 
\"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.677137 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.677154 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hnbc\" (UniqueName: \"kubernetes.io/projected/a25d738b-a5be-44f2-86f2-9b554c3f7947-kube-api-access-9hnbc\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.677170 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.678291 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/a25d738b-a5be-44f2-86f2-9b554c3f7947-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.679322 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.681260 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.681849 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.683137 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.683625 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.684095 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.688890 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.689173 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.691003 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.712416 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hnbc\" (UniqueName: \"kubernetes.io/projected/a25d738b-a5be-44f2-86f2-9b554c3f7947-kube-api-access-9hnbc\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:26 crc kubenswrapper[5072]: I1124 11:55:26.773380 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:55:27 crc kubenswrapper[5072]: W1124 11:55:27.302672 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda25d738b_a5be_44f2_86f2_9b554c3f7947.slice/crio-3ed03a92b7885d141d3d22f36634b139cad8dfd22413037506038e7e1528bedf WatchSource:0}: Error finding container 3ed03a92b7885d141d3d22f36634b139cad8dfd22413037506038e7e1528bedf: Status 404 returned error can't find the container with id 3ed03a92b7885d141d3d22f36634b139cad8dfd22413037506038e7e1528bedf Nov 24 11:55:27 crc kubenswrapper[5072]: I1124 11:55:27.302812 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7"] Nov 24 11:55:27 crc kubenswrapper[5072]: I1124 11:55:27.329699 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" event={"ID":"a25d738b-a5be-44f2-86f2-9b554c3f7947","Type":"ContainerStarted","Data":"3ed03a92b7885d141d3d22f36634b139cad8dfd22413037506038e7e1528bedf"} Nov 24 11:55:28 crc kubenswrapper[5072]: I1124 11:55:28.343847 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" event={"ID":"a25d738b-a5be-44f2-86f2-9b554c3f7947","Type":"ContainerStarted","Data":"2cac633ab3a6295e5e7cb8794ddbf13405cfeef872ac8d22190b684c3cfa7152"} Nov 24 11:55:28 crc kubenswrapper[5072]: I1124 11:55:28.370820 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" podStartSLOduration=1.767831963 podStartE2EDuration="2.370791853s" podCreationTimestamp="2025-11-24 11:55:26 +0000 UTC" firstStartedPulling="2025-11-24 11:55:27.305132725 +0000 UTC m=+2779.016657201" lastFinishedPulling="2025-11-24 11:55:27.908092585 +0000 UTC m=+2779.619617091" observedRunningTime="2025-11-24 11:55:28.363080982 +0000 UTC m=+2780.074605478" watchObservedRunningTime="2025-11-24 11:55:28.370791853 +0000 UTC m=+2780.082316349" Nov 24 11:56:36 crc kubenswrapper[5072]: I1124 11:56:36.991173 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5qvhl"] Nov 24 11:56:36 crc kubenswrapper[5072]: I1124 11:56:36.994069 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5qvhl" Nov 24 11:56:37 crc kubenswrapper[5072]: I1124 11:56:37.031342 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5qvhl"] Nov 24 11:56:37 crc kubenswrapper[5072]: I1124 11:56:37.090897 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r6mp\" (UniqueName: \"kubernetes.io/projected/2d314ccb-ad5a-45d0-9529-f4358679c021-kube-api-access-5r6mp\") pod \"community-operators-5qvhl\" (UID: \"2d314ccb-ad5a-45d0-9529-f4358679c021\") " pod="openshift-marketplace/community-operators-5qvhl" Nov 24 11:56:37 crc kubenswrapper[5072]: I1124 11:56:37.091025 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d314ccb-ad5a-45d0-9529-f4358679c021-catalog-content\") pod \"community-operators-5qvhl\" (UID: \"2d314ccb-ad5a-45d0-9529-f4358679c021\") " pod="openshift-marketplace/community-operators-5qvhl" Nov 24 11:56:37 crc kubenswrapper[5072]: I1124 11:56:37.091083 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d314ccb-ad5a-45d0-9529-f4358679c021-utilities\") pod \"community-operators-5qvhl\" (UID: \"2d314ccb-ad5a-45d0-9529-f4358679c021\") " pod="openshift-marketplace/community-operators-5qvhl" Nov 24 11:56:37 crc kubenswrapper[5072]: I1124 11:56:37.192049 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r6mp\" (UniqueName: \"kubernetes.io/projected/2d314ccb-ad5a-45d0-9529-f4358679c021-kube-api-access-5r6mp\") pod \"community-operators-5qvhl\" (UID: \"2d314ccb-ad5a-45d0-9529-f4358679c021\") " pod="openshift-marketplace/community-operators-5qvhl" Nov 24 11:56:37 crc kubenswrapper[5072]: I1124 11:56:37.192106 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d314ccb-ad5a-45d0-9529-f4358679c021-catalog-content\") pod \"community-operators-5qvhl\" (UID: \"2d314ccb-ad5a-45d0-9529-f4358679c021\") " pod="openshift-marketplace/community-operators-5qvhl" Nov 24 11:56:37 crc kubenswrapper[5072]: I1124 11:56:37.192140 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d314ccb-ad5a-45d0-9529-f4358679c021-utilities\") pod \"community-operators-5qvhl\" (UID: \"2d314ccb-ad5a-45d0-9529-f4358679c021\") " pod="openshift-marketplace/community-operators-5qvhl" Nov 24 11:56:37 crc kubenswrapper[5072]: I1124 11:56:37.192676 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d314ccb-ad5a-45d0-9529-f4358679c021-catalog-content\") pod \"community-operators-5qvhl\" (UID: \"2d314ccb-ad5a-45d0-9529-f4358679c021\") " pod="openshift-marketplace/community-operators-5qvhl" Nov 24 11:56:37 crc kubenswrapper[5072]: I1124 11:56:37.192702 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d314ccb-ad5a-45d0-9529-f4358679c021-utilities\") pod \"community-operators-5qvhl\" (UID: \"2d314ccb-ad5a-45d0-9529-f4358679c021\") " pod="openshift-marketplace/community-operators-5qvhl" Nov 24 11:56:37 crc kubenswrapper[5072]: I1124 11:56:37.214908 5072 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5r6mp\" (UniqueName: \"kubernetes.io/projected/2d314ccb-ad5a-45d0-9529-f4358679c021-kube-api-access-5r6mp\") pod \"community-operators-5qvhl\" (UID: \"2d314ccb-ad5a-45d0-9529-f4358679c021\") " pod="openshift-marketplace/community-operators-5qvhl" Nov 24 11:56:37 crc kubenswrapper[5072]: I1124 11:56:37.325726 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5qvhl" Nov 24 11:56:37 crc kubenswrapper[5072]: I1124 11:56:37.861602 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5qvhl"] Nov 24 11:56:38 crc kubenswrapper[5072]: I1124 11:56:38.094763 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qvhl" event={"ID":"2d314ccb-ad5a-45d0-9529-f4358679c021","Type":"ContainerStarted","Data":"09d7714c7bc4aa1b9ed14d720bd3e7a380bc8141521544bce2000c9c729ad3ea"} Nov 24 11:56:39 crc kubenswrapper[5072]: I1124 11:56:39.104237 5072 generic.go:334] "Generic (PLEG): container finished" podID="2d314ccb-ad5a-45d0-9529-f4358679c021" containerID="dba2516a8a26ae789768f63d9fc55adf988aa16e6340e8b2173341b473a4b08b" exitCode=0 Nov 24 11:56:39 crc kubenswrapper[5072]: I1124 11:56:39.104599 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qvhl" event={"ID":"2d314ccb-ad5a-45d0-9529-f4358679c021","Type":"ContainerDied","Data":"dba2516a8a26ae789768f63d9fc55adf988aa16e6340e8b2173341b473a4b08b"} Nov 24 11:56:41 crc kubenswrapper[5072]: I1124 11:56:41.127706 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qvhl" event={"ID":"2d314ccb-ad5a-45d0-9529-f4358679c021","Type":"ContainerStarted","Data":"493d1cb15cfaed71d919e4bda96e163e92a7ba73e360d63bce7d39a4e756ca56"} Nov 24 11:56:42 crc kubenswrapper[5072]: I1124 11:56:42.148457 5072 generic.go:334] "Generic (PLEG): container finished" podID="2d314ccb-ad5a-45d0-9529-f4358679c021" containerID="493d1cb15cfaed71d919e4bda96e163e92a7ba73e360d63bce7d39a4e756ca56" exitCode=0 Nov 24 11:56:42 crc kubenswrapper[5072]: I1124 11:56:42.149565 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qvhl" event={"ID":"2d314ccb-ad5a-45d0-9529-f4358679c021","Type":"ContainerDied","Data":"493d1cb15cfaed71d919e4bda96e163e92a7ba73e360d63bce7d39a4e756ca56"} Nov 24 11:56:45 crc kubenswrapper[5072]: I1124 11:56:45.174934 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qvhl" event={"ID":"2d314ccb-ad5a-45d0-9529-f4358679c021","Type":"ContainerStarted","Data":"1c96e22692bb53095e0b7c44f9d76297e74eb126803ffb650ce224708f1a8562"} Nov 24 11:56:45 crc kubenswrapper[5072]: I1124 11:56:45.201498 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5qvhl" podStartSLOduration=4.278639678 podStartE2EDuration="9.201478906s" podCreationTimestamp="2025-11-24 11:56:36 +0000 UTC" firstStartedPulling="2025-11-24 11:56:39.107018341 +0000 UTC m=+2850.818542837" lastFinishedPulling="2025-11-24 11:56:44.029857589 +0000 UTC m=+2855.741382065" observedRunningTime="2025-11-24 11:56:45.192439041 +0000 UTC m=+2856.903963527" watchObservedRunningTime="2025-11-24 11:56:45.201478906 +0000 UTC m=+2856.913003382" Nov 24 11:56:47 crc kubenswrapper[5072]: I1124 11:56:47.326884 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-5qvhl" Nov 24 11:56:47 crc kubenswrapper[5072]: I1124 11:56:47.327987 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5qvhl" Nov 24 11:56:47 crc kubenswrapper[5072]: I1124 11:56:47.395028 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5qvhl" Nov 24 11:56:49 crc kubenswrapper[5072]: I1124 11:56:49.288539 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5qvhl" Nov 24 11:56:49 crc kubenswrapper[5072]: I1124 11:56:49.346172 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5qvhl"] Nov 24 11:56:51 crc kubenswrapper[5072]: I1124 11:56:51.232783 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5qvhl" podUID="2d314ccb-ad5a-45d0-9529-f4358679c021" containerName="registry-server" containerID="cri-o://1c96e22692bb53095e0b7c44f9d76297e74eb126803ffb650ce224708f1a8562" gracePeriod=2 Nov 24 11:56:52 crc kubenswrapper[5072]: I1124 11:56:52.245820 5072 generic.go:334] "Generic (PLEG): container finished" podID="2d314ccb-ad5a-45d0-9529-f4358679c021" containerID="1c96e22692bb53095e0b7c44f9d76297e74eb126803ffb650ce224708f1a8562" exitCode=0 Nov 24 11:56:52 crc kubenswrapper[5072]: I1124 11:56:52.246045 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qvhl" event={"ID":"2d314ccb-ad5a-45d0-9529-f4358679c021","Type":"ContainerDied","Data":"1c96e22692bb53095e0b7c44f9d76297e74eb126803ffb650ce224708f1a8562"} Nov 24 11:56:54 crc kubenswrapper[5072]: I1124 11:56:54.399354 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5qvhl" Nov 24 11:56:54 crc kubenswrapper[5072]: I1124 11:56:54.453742 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r6mp\" (UniqueName: \"kubernetes.io/projected/2d314ccb-ad5a-45d0-9529-f4358679c021-kube-api-access-5r6mp\") pod \"2d314ccb-ad5a-45d0-9529-f4358679c021\" (UID: \"2d314ccb-ad5a-45d0-9529-f4358679c021\") " Nov 24 11:56:54 crc kubenswrapper[5072]: I1124 11:56:54.454199 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d314ccb-ad5a-45d0-9529-f4358679c021-utilities\") pod \"2d314ccb-ad5a-45d0-9529-f4358679c021\" (UID: \"2d314ccb-ad5a-45d0-9529-f4358679c021\") " Nov 24 11:56:54 crc kubenswrapper[5072]: I1124 11:56:54.454306 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d314ccb-ad5a-45d0-9529-f4358679c021-catalog-content\") pod \"2d314ccb-ad5a-45d0-9529-f4358679c021\" (UID: \"2d314ccb-ad5a-45d0-9529-f4358679c021\") " Nov 24 11:56:54 crc kubenswrapper[5072]: I1124 11:56:54.455188 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d314ccb-ad5a-45d0-9529-f4358679c021-utilities" (OuterVolumeSpecName: "utilities") pod "2d314ccb-ad5a-45d0-9529-f4358679c021" (UID: "2d314ccb-ad5a-45d0-9529-f4358679c021"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:56:54 crc kubenswrapper[5072]: I1124 11:56:54.459086 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d314ccb-ad5a-45d0-9529-f4358679c021-kube-api-access-5r6mp" (OuterVolumeSpecName: "kube-api-access-5r6mp") pod "2d314ccb-ad5a-45d0-9529-f4358679c021" (UID: "2d314ccb-ad5a-45d0-9529-f4358679c021"). InnerVolumeSpecName "kube-api-access-5r6mp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:54 crc kubenswrapper[5072]: I1124 11:56:54.459501 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r6mp\" (UniqueName: \"kubernetes.io/projected/2d314ccb-ad5a-45d0-9529-f4358679c021-kube-api-access-5r6mp\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:54 crc kubenswrapper[5072]: I1124 11:56:54.459523 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d314ccb-ad5a-45d0-9529-f4358679c021-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:54 crc kubenswrapper[5072]: I1124 11:56:54.747817 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d314ccb-ad5a-45d0-9529-f4358679c021-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d314ccb-ad5a-45d0-9529-f4358679c021" (UID: "2d314ccb-ad5a-45d0-9529-f4358679c021"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:56:54 crc kubenswrapper[5072]: I1124 11:56:54.764770 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d314ccb-ad5a-45d0-9529-f4358679c021-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:55 crc kubenswrapper[5072]: I1124 11:56:55.274549 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qvhl" event={"ID":"2d314ccb-ad5a-45d0-9529-f4358679c021","Type":"ContainerDied","Data":"09d7714c7bc4aa1b9ed14d720bd3e7a380bc8141521544bce2000c9c729ad3ea"} Nov 24 11:56:55 crc kubenswrapper[5072]: I1124 11:56:55.274626 5072 scope.go:117] "RemoveContainer" containerID="1c96e22692bb53095e0b7c44f9d76297e74eb126803ffb650ce224708f1a8562" Nov 24 11:56:55 crc kubenswrapper[5072]: I1124 11:56:55.274583 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5qvhl" Nov 24 11:56:55 crc kubenswrapper[5072]: I1124 11:56:55.301739 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5qvhl"] Nov 24 11:56:55 crc kubenswrapper[5072]: I1124 11:56:55.308814 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5qvhl"] Nov 24 11:56:55 crc kubenswrapper[5072]: I1124 11:56:55.315759 5072 scope.go:117] "RemoveContainer" containerID="493d1cb15cfaed71d919e4bda96e163e92a7ba73e360d63bce7d39a4e756ca56" Nov 24 11:56:55 crc kubenswrapper[5072]: I1124 11:56:55.334709 5072 scope.go:117] "RemoveContainer" containerID="dba2516a8a26ae789768f63d9fc55adf988aa16e6340e8b2173341b473a4b08b" Nov 24 11:56:57 crc kubenswrapper[5072]: I1124 11:56:57.030478 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d314ccb-ad5a-45d0-9529-f4358679c021" path="/var/lib/kubelet/pods/2d314ccb-ad5a-45d0-9529-f4358679c021/volumes" Nov 24 11:57:43 crc kubenswrapper[5072]: I1124 11:57:43.645513 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:57:43 crc kubenswrapper[5072]: I1124 11:57:43.646053 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:58:13 crc kubenswrapper[5072]: I1124 11:58:13.645008 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:58:13 crc kubenswrapper[5072]: I1124 11:58:13.645613 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:58:37 crc kubenswrapper[5072]: I1124 11:58:37.312465 5072 generic.go:334] "Generic (PLEG): container finished" podID="a25d738b-a5be-44f2-86f2-9b554c3f7947" containerID="2cac633ab3a6295e5e7cb8794ddbf13405cfeef872ac8d22190b684c3cfa7152" exitCode=0 Nov 24 11:58:37 crc kubenswrapper[5072]: I1124 11:58:37.312547 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" event={"ID":"a25d738b-a5be-44f2-86f2-9b554c3f7947","Type":"ContainerDied","Data":"2cac633ab3a6295e5e7cb8794ddbf13405cfeef872ac8d22190b684c3cfa7152"} Nov 24 11:58:38 crc kubenswrapper[5072]: I1124 11:58:38.758111 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:58:38 crc kubenswrapper[5072]: I1124 11:58:38.896662 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-cell1-compute-config-1\") pod \"a25d738b-a5be-44f2-86f2-9b554c3f7947\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " Nov 24 11:58:38 crc kubenswrapper[5072]: I1124 11:58:38.896935 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-cell1-compute-config-0\") pod \"a25d738b-a5be-44f2-86f2-9b554c3f7947\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " Nov 24 11:58:38 crc kubenswrapper[5072]: I1124 11:58:38.897013 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-custom-ceph-combined-ca-bundle\") pod \"a25d738b-a5be-44f2-86f2-9b554c3f7947\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " Nov 24 11:58:38 crc kubenswrapper[5072]: I1124 11:58:38.897050 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-migration-ssh-key-1\") pod \"a25d738b-a5be-44f2-86f2-9b554c3f7947\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " Nov 24 11:58:38 crc kubenswrapper[5072]: I1124 11:58:38.897086 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-ssh-key\") pod \"a25d738b-a5be-44f2-86f2-9b554c3f7947\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " Nov 24 11:58:38 crc kubenswrapper[5072]: I1124 11:58:38.897135 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/a25d738b-a5be-44f2-86f2-9b554c3f7947-ceph-nova-0\") pod \"a25d738b-a5be-44f2-86f2-9b554c3f7947\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " Nov 24 11:58:38 crc kubenswrapper[5072]: I1124 11:58:38.897180 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hnbc\" (UniqueName: \"kubernetes.io/projected/a25d738b-a5be-44f2-86f2-9b554c3f7947-kube-api-access-9hnbc\") pod \"a25d738b-a5be-44f2-86f2-9b554c3f7947\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " Nov 24 11:58:38 crc kubenswrapper[5072]: I1124 11:58:38.897224 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-inventory\") pod \"a25d738b-a5be-44f2-86f2-9b554c3f7947\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " Nov 24 11:58:38 crc kubenswrapper[5072]: I1124 11:58:38.897260 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-migration-ssh-key-0\") pod \"a25d738b-a5be-44f2-86f2-9b554c3f7947\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " Nov 24 11:58:38 crc kubenswrapper[5072]: I1124 11:58:38.897284 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-ceph\") pod \"a25d738b-a5be-44f2-86f2-9b554c3f7947\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " Nov 24 11:58:38 crc kubenswrapper[5072]: I1124 11:58:38.897360 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-extra-config-0\") pod \"a25d738b-a5be-44f2-86f2-9b554c3f7947\" (UID: \"a25d738b-a5be-44f2-86f2-9b554c3f7947\") " Nov 24 11:58:38 crc kubenswrapper[5072]: I1124 11:58:38.904477 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "a25d738b-a5be-44f2-86f2-9b554c3f7947" (UID: "a25d738b-a5be-44f2-86f2-9b554c3f7947"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:58:38 crc kubenswrapper[5072]: I1124 11:58:38.914193 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25d738b-a5be-44f2-86f2-9b554c3f7947-kube-api-access-9hnbc" (OuterVolumeSpecName: "kube-api-access-9hnbc") pod "a25d738b-a5be-44f2-86f2-9b554c3f7947" (UID: "a25d738b-a5be-44f2-86f2-9b554c3f7947"). InnerVolumeSpecName "kube-api-access-9hnbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:58:38 crc kubenswrapper[5072]: I1124 11:58:38.914934 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-ceph" (OuterVolumeSpecName: "ceph") pod "a25d738b-a5be-44f2-86f2-9b554c3f7947" (UID: "a25d738b-a5be-44f2-86f2-9b554c3f7947"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:58:38 crc kubenswrapper[5072]: I1124 11:58:38.923804 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a25d738b-a5be-44f2-86f2-9b554c3f7947-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "a25d738b-a5be-44f2-86f2-9b554c3f7947" (UID: "a25d738b-a5be-44f2-86f2-9b554c3f7947"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:58:38 crc kubenswrapper[5072]: I1124 11:58:38.929752 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "a25d738b-a5be-44f2-86f2-9b554c3f7947" (UID: "a25d738b-a5be-44f2-86f2-9b554c3f7947"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:58:38 crc kubenswrapper[5072]: I1124 11:58:38.931379 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "a25d738b-a5be-44f2-86f2-9b554c3f7947" (UID: "a25d738b-a5be-44f2-86f2-9b554c3f7947"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:58:38 crc kubenswrapper[5072]: I1124 11:58:38.932919 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "a25d738b-a5be-44f2-86f2-9b554c3f7947" (UID: "a25d738b-a5be-44f2-86f2-9b554c3f7947"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:58:38 crc kubenswrapper[5072]: I1124 11:58:38.933844 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a25d738b-a5be-44f2-86f2-9b554c3f7947" (UID: "a25d738b-a5be-44f2-86f2-9b554c3f7947"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:58:38 crc kubenswrapper[5072]: I1124 11:58:38.941116 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "a25d738b-a5be-44f2-86f2-9b554c3f7947" (UID: "a25d738b-a5be-44f2-86f2-9b554c3f7947"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:58:38 crc kubenswrapper[5072]: I1124 11:58:38.946812 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "a25d738b-a5be-44f2-86f2-9b554c3f7947" (UID: "a25d738b-a5be-44f2-86f2-9b554c3f7947"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:58:38 crc kubenswrapper[5072]: I1124 11:58:38.952655 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-inventory" (OuterVolumeSpecName: "inventory") pod "a25d738b-a5be-44f2-86f2-9b554c3f7947" (UID: "a25d738b-a5be-44f2-86f2-9b554c3f7947"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:58:39 crc kubenswrapper[5072]: I1124 11:58:39.000127 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hnbc\" (UniqueName: \"kubernetes.io/projected/a25d738b-a5be-44f2-86f2-9b554c3f7947-kube-api-access-9hnbc\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:39 crc kubenswrapper[5072]: I1124 11:58:39.000172 5072 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:39 crc kubenswrapper[5072]: I1124 11:58:39.000186 5072 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:39 crc kubenswrapper[5072]: I1124 11:58:39.000198 5072 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:39 crc kubenswrapper[5072]: I1124 11:58:39.000212 5072 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:39 crc kubenswrapper[5072]: I1124 11:58:39.000224 5072 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:39 crc kubenswrapper[5072]: I1124 11:58:39.000236 5072 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:39 crc kubenswrapper[5072]: I1124 11:58:39.000247 5072 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:39 crc kubenswrapper[5072]: I1124 11:58:39.000259 5072 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:39 crc kubenswrapper[5072]: I1124 11:58:39.000270 5072 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a25d738b-a5be-44f2-86f2-9b554c3f7947-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:39 crc kubenswrapper[5072]: I1124 11:58:39.000281 5072 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/a25d738b-a5be-44f2-86f2-9b554c3f7947-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:39 crc kubenswrapper[5072]: I1124 11:58:39.333412 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" event={"ID":"a25d738b-a5be-44f2-86f2-9b554c3f7947","Type":"ContainerDied","Data":"3ed03a92b7885d141d3d22f36634b139cad8dfd22413037506038e7e1528bedf"} Nov 24 11:58:39 crc kubenswrapper[5072]: I1124 11:58:39.333730 5072 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="3ed03a92b7885d141d3d22f36634b139cad8dfd22413037506038e7e1528bedf" Nov 24 11:58:39 crc kubenswrapper[5072]: I1124 11:58:39.333518 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7" Nov 24 11:58:43 crc kubenswrapper[5072]: I1124 11:58:43.645312 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:58:43 crc kubenswrapper[5072]: I1124 11:58:43.645804 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:58:43 crc kubenswrapper[5072]: I1124 11:58:43.645862 5072 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 11:58:43 crc kubenswrapper[5072]: I1124 11:58:43.646816 5072 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f"} pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 11:58:43 crc kubenswrapper[5072]: I1124 11:58:43.646910 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" containerID="cri-o://4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f" gracePeriod=600 Nov 24 11:58:43 crc kubenswrapper[5072]: E1124 11:58:43.826570 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:58:44 crc kubenswrapper[5072]: I1124 11:58:44.380869 5072 generic.go:334] "Generic (PLEG): container finished" podID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerID="4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f" exitCode=0 Nov 24 11:58:44 crc kubenswrapper[5072]: I1124 11:58:44.380914 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerDied","Data":"4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f"} Nov 24 11:58:44 crc kubenswrapper[5072]: I1124 11:58:44.381010 5072 scope.go:117] "RemoveContainer" containerID="214cd3fb3c364f4c0eb062815b36644ab6af47ce8000f33d400642a27a4dd0ec" Nov 24 11:58:44 crc kubenswrapper[5072]: I1124 11:58:44.381684 5072 scope.go:117] "RemoveContainer" containerID="4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f" Nov 24 11:58:44 crc 
kubenswrapper[5072]: E1124 11:58:44.381973 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.146454 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"]
Nov 24 11:58:53 crc kubenswrapper[5072]: E1124 11:58:53.147330 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d314ccb-ad5a-45d0-9529-f4358679c021" containerName="registry-server"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.147344 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d314ccb-ad5a-45d0-9529-f4358679c021" containerName="registry-server"
Nov 24 11:58:53 crc kubenswrapper[5072]: E1124 11:58:53.147359 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25d738b-a5be-44f2-86f2-9b554c3f7947" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.147381 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25d738b-a5be-44f2-86f2-9b554c3f7947" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam"
Nov 24 11:58:53 crc kubenswrapper[5072]: E1124 11:58:53.147398 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d314ccb-ad5a-45d0-9529-f4358679c021" containerName="extract-utilities"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.147405 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d314ccb-ad5a-45d0-9529-f4358679c021" containerName="extract-utilities"
Nov 24 11:58:53 crc kubenswrapper[5072]: E1124 11:58:53.147414 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d314ccb-ad5a-45d0-9529-f4358679c021" containerName="extract-content"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.147420 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d314ccb-ad5a-45d0-9529-f4358679c021" containerName="extract-content"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.147602 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d314ccb-ad5a-45d0-9529-f4358679c021" containerName="registry-server"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.147617 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="a25d738b-a5be-44f2-86f2-9b554c3f7947" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.148656 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.154445 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.154590 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.164272 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"]
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.302364 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-run\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.303070 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"]
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.314777 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.314846 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.314904 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.314995 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.315026 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.315209 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.315314 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.315364 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.315507 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.315540 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-sys\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.315586 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-dev\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.315631 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.315728 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vjxm\" (UniqueName: \"kubernetes.io/projected/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-kube-api-access-7vjxm\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.315774 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.315822 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.318323 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.321541 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.324963 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"]
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.418228 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.418277 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.418309 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-run\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.418327 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e51194ec-7c1f-4609-996f-ee210bb13bb5-config-data-custom\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.418349 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.418366 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.418399 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.418420 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.418437 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.418524 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.418614 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-sys\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.418647 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-dev\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.418673 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.418692 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.418714 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e51194ec-7c1f-4609-996f-ee210bb13bb5-config-data\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.418748 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vjxm\" (UniqueName: \"kubernetes.io/projected/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-kube-api-access-7vjxm\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.418771 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.418795 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e51194ec-7c1f-4609-996f-ee210bb13bb5-scripts\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.418813 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.418863 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.418885 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-lib-modules\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.418904 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-run\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.418928 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e51194ec-7c1f-4609-996f-ee210bb13bb5-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.418958 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-etc-nvme\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.418974 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-sys\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.418994 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v95s\" (UniqueName: \"kubernetes.io/projected/e51194ec-7c1f-4609-996f-ee210bb13bb5-kube-api-access-2v95s\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.419019 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.419044 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.419067 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-dev\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.419084 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e51194ec-7c1f-4609-996f-ee210bb13bb5-ceph\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.419105 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.419124 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.419210 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.419590 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.419833 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-run\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.419874 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-dev\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.419902 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.420157 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.420387 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.420645 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.420718 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-sys\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.420956 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.424807 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.425737 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.425849 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.433468 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.440956 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.442569 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vjxm\" (UniqueName: \"kubernetes.io/projected/9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0-kube-api-access-7vjxm\") pod \"cinder-volume-volume1-0\" (UID: \"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0\") " pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.520720 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.520877 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.521636 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-run\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.521685 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e51194ec-7c1f-4609-996f-ee210bb13bb5-config-data-custom\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.521731 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-run\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.521756 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.521822 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.521909 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.521944 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e51194ec-7c1f-4609-996f-ee210bb13bb5-config-data\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.522011 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e51194ec-7c1f-4609-996f-ee210bb13bb5-scripts\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.522129 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.522154 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-lib-modules\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.522183 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e51194ec-7c1f-4609-996f-ee210bb13bb5-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.522234 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-etc-nvme\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.522256 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-sys\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.522277 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v95s\" (UniqueName: \"kubernetes.io/projected/e51194ec-7c1f-4609-996f-ee210bb13bb5-kube-api-access-2v95s\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.522281 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.522329 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-dev\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.522352 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e51194ec-7c1f-4609-996f-ee210bb13bb5-ceph\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.522380 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-etc-nvme\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.522487 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.522512 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.522533 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.522765 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-sys\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.522795 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-dev\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.522813 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e51194ec-7c1f-4609-996f-ee210bb13bb5-lib-modules\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.523044 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.526146 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e51194ec-7c1f-4609-996f-ee210bb13bb5-config-data-custom\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.526789 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e51194ec-7c1f-4609-996f-ee210bb13bb5-scripts\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.527847 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e51194ec-7c1f-4609-996f-ee210bb13bb5-config-data\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.528910 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e51194ec-7c1f-4609-996f-ee210bb13bb5-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.532019 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e51194ec-7c1f-4609-996f-ee210bb13bb5-ceph\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.546352 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v95s\" (UniqueName: \"kubernetes.io/projected/e51194ec-7c1f-4609-996f-ee210bb13bb5-kube-api-access-2v95s\") pod \"cinder-backup-0\" (UID: \"e51194ec-7c1f-4609-996f-ee210bb13bb5\") " pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.644276 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.724206 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-6hvhf"]
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.732814 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-6hvhf"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.758919 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-6hvhf"]
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.828192 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2b9ee49-0cbe-43d3-a768-74c71d0f79e8-operator-scripts\") pod \"manila-db-create-6hvhf\" (UID: \"e2b9ee49-0cbe-43d3-a768-74c71d0f79e8\") " pod="openstack/manila-db-create-6hvhf"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.828456 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2rh9\" (UniqueName: \"kubernetes.io/projected/e2b9ee49-0cbe-43d3-a768-74c71d0f79e8-kube-api-access-z2rh9\") pod \"manila-db-create-6hvhf\" (UID: \"e2b9ee49-0cbe-43d3-a768-74c71d0f79e8\") " pod="openstack/manila-db-create-6hvhf"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.854525 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-d2d4-account-create-hl6fw"]
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.856093 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-d2d4-account-create-hl6fw"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.862439 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.893514 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-d2d4-account-create-hl6fw"]
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.930360 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2rh9\" (UniqueName: \"kubernetes.io/projected/e2b9ee49-0cbe-43d3-a768-74c71d0f79e8-kube-api-access-z2rh9\") pod \"manila-db-create-6hvhf\" (UID: \"e2b9ee49-0cbe-43d3-a768-74c71d0f79e8\") " pod="openstack/manila-db-create-6hvhf"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.930510 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2b9ee49-0cbe-43d3-a768-74c71d0f79e8-operator-scripts\") pod \"manila-db-create-6hvhf\" (UID: \"e2b9ee49-0cbe-43d3-a768-74c71d0f79e8\") " pod="openstack/manila-db-create-6hvhf"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.931538 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2b9ee49-0cbe-43d3-a768-74c71d0f79e8-operator-scripts\") pod \"manila-db-create-6hvhf\" (UID: \"e2b9ee49-0cbe-43d3-a768-74c71d0f79e8\") " pod="openstack/manila-db-create-6hvhf"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.931893 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-668c6889fc-xbssb"]
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.937416 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-668c6889fc-xbssb"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.951294 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.951606 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.951648 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.951990 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-5s8b2"
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.956166 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-668c6889fc-xbssb"]
Nov 24 11:58:53 crc kubenswrapper[5072]: I1124 11:58:53.991213 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2rh9\" (UniqueName: \"kubernetes.io/projected/e2b9ee49-0cbe-43d3-a768-74c71d0f79e8-kube-api-access-z2rh9\") pod \"manila-db-create-6hvhf\" (UID: \"e2b9ee49-0cbe-43d3-a768-74c71d0f79e8\") " pod="openstack/manila-db-create-6hvhf"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.035408 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18b8401a-38a6-41b3-abc0-d4924c551633-config-data\") pod \"horizon-668c6889fc-xbssb\" (UID: \"18b8401a-38a6-41b3-abc0-d4924c551633\") " pod="openstack/horizon-668c6889fc-xbssb"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.035879 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18b8401a-38a6-41b3-abc0-d4924c551633-scripts\") pod \"horizon-668c6889fc-xbssb\" (UID: \"18b8401a-38a6-41b3-abc0-d4924c551633\") " pod="openstack/horizon-668c6889fc-xbssb"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.035989 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh8s4\" (UniqueName: \"kubernetes.io/projected/feb68e18-e333-419a-acbf-7bc331cc35a8-kube-api-access-xh8s4\") pod \"manila-d2d4-account-create-hl6fw\" (UID: \"feb68e18-e333-419a-acbf-7bc331cc35a8\") " pod="openstack/manila-d2d4-account-create-hl6fw"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.036041 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th9l9\" (UniqueName: \"kubernetes.io/projected/18b8401a-38a6-41b3-abc0-d4924c551633-kube-api-access-th9l9\") pod \"horizon-668c6889fc-xbssb\" (UID: \"18b8401a-38a6-41b3-abc0-d4924c551633\") " pod="openstack/horizon-668c6889fc-xbssb"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.036077 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/18b8401a-38a6-41b3-abc0-d4924c551633-horizon-secret-key\") pod \"horizon-668c6889fc-xbssb\" (UID: \"18b8401a-38a6-41b3-abc0-d4924c551633\") " pod="openstack/horizon-668c6889fc-xbssb"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.036107 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18b8401a-38a6-41b3-abc0-d4924c551633-logs\") pod \"horizon-668c6889fc-xbssb\" (UID: \"18b8401a-38a6-41b3-abc0-d4924c551633\") " pod="openstack/horizon-668c6889fc-xbssb"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.036198 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb68e18-e333-419a-acbf-7bc331cc35a8-operator-scripts\") pod \"manila-d2d4-account-create-hl6fw\" (UID: \"feb68e18-e333-419a-acbf-7bc331cc35a8\") " pod="openstack/manila-d2d4-account-create-hl6fw"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.053130 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.054771 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.059087 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.059116 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-bb4tx"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.059088 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.059089 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.069228 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-6hvhf"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.075612 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.081118 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.089762 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.102081 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.102254 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.119232 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.126579 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6ccd6d974c-ptg7b"]
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.128314 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6ccd6d974c-ptg7b"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.137725 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb68e18-e333-419a-acbf-7bc331cc35a8-operator-scripts\") pod \"manila-d2d4-account-create-hl6fw\" (UID: \"feb68e18-e333-419a-acbf-7bc331cc35a8\") " pod="openstack/manila-d2d4-account-create-hl6fw"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.137886 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18b8401a-38a6-41b3-abc0-d4924c551633-config-data\") pod \"horizon-668c6889fc-xbssb\" (UID: \"18b8401a-38a6-41b3-abc0-d4924c551633\") " pod="openstack/horizon-668c6889fc-xbssb"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.137919 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18b8401a-38a6-41b3-abc0-d4924c551633-scripts\") pod \"horizon-668c6889fc-xbssb\" (UID: \"18b8401a-38a6-41b3-abc0-d4924c551633\") " pod="openstack/horizon-668c6889fc-xbssb"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.138040 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh8s4\" (UniqueName: \"kubernetes.io/projected/feb68e18-e333-419a-acbf-7bc331cc35a8-kube-api-access-xh8s4\") pod \"manila-d2d4-account-create-hl6fw\" (UID: \"feb68e18-e333-419a-acbf-7bc331cc35a8\") " pod="openstack/manila-d2d4-account-create-hl6fw"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.138089 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th9l9\" (UniqueName: \"kubernetes.io/projected/18b8401a-38a6-41b3-abc0-d4924c551633-kube-api-access-th9l9\") pod \"horizon-668c6889fc-xbssb\" (UID: \"18b8401a-38a6-41b3-abc0-d4924c551633\") " pod="openstack/horizon-668c6889fc-xbssb"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.138158 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/18b8401a-38a6-41b3-abc0-d4924c551633-horizon-secret-key\") pod \"horizon-668c6889fc-xbssb\" (UID: \"18b8401a-38a6-41b3-abc0-d4924c551633\") " pod="openstack/horizon-668c6889fc-xbssb"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.138208 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18b8401a-38a6-41b3-abc0-d4924c551633-logs\") pod \"horizon-668c6889fc-xbssb\" (UID: \"18b8401a-38a6-41b3-abc0-d4924c551633\") " pod="openstack/horizon-668c6889fc-xbssb"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.139081 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18b8401a-38a6-41b3-abc0-d4924c551633-logs\") pod \"horizon-668c6889fc-xbssb\" (UID: \"18b8401a-38a6-41b3-abc0-d4924c551633\") " pod="openstack/horizon-668c6889fc-xbssb"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.140117 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb68e18-e333-419a-acbf-7bc331cc35a8-operator-scripts\") pod \"manila-d2d4-account-create-hl6fw\" (UID: \"feb68e18-e333-419a-acbf-7bc331cc35a8\") " pod="openstack/manila-d2d4-account-create-hl6fw"
Nov 24 11:58:54 crc kubenswrapper[5072]: E1124 11:58:54.140887 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceph combined-ca-bundle config-data glance httpd-run kube-api-access-g2cfk logs public-tls-certs scripts], unattached volumes=[], failed to process volumes=[ceph combined-ca-bundle config-data glance httpd-run kube-api-access-g2cfk logs public-tls-certs scripts]: context canceled" pod="openstack/glance-default-external-api-0" podUID="b0488c3e-43d5-4f10-b3e4-d8904a296c40"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.141985 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18b8401a-38a6-41b3-abc0-d4924c551633-scripts\") pod \"horizon-668c6889fc-xbssb\" (UID: \"18b8401a-38a6-41b3-abc0-d4924c551633\") " pod="openstack/horizon-668c6889fc-xbssb"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.142036 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18b8401a-38a6-41b3-abc0-d4924c551633-config-data\") pod \"horizon-668c6889fc-xbssb\" (UID: \"18b8401a-38a6-41b3-abc0-d4924c551633\") " pod="openstack/horizon-668c6889fc-xbssb"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.149529 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.157043 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/18b8401a-38a6-41b3-abc0-d4924c551633-horizon-secret-key\") pod \"horizon-668c6889fc-xbssb\" (UID: \"18b8401a-38a6-41b3-abc0-d4924c551633\") " pod="openstack/horizon-668c6889fc-xbssb"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.162080 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th9l9\" (UniqueName: \"kubernetes.io/projected/18b8401a-38a6-41b3-abc0-d4924c551633-kube-api-access-th9l9\") pod \"horizon-668c6889fc-xbssb\" (UID: \"18b8401a-38a6-41b3-abc0-d4924c551633\") " pod="openstack/horizon-668c6889fc-xbssb"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.162704 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh8s4\" (UniqueName: \"kubernetes.io/projected/feb68e18-e333-419a-acbf-7bc331cc35a8-kube-api-access-xh8s4\") pod \"manila-d2d4-account-create-hl6fw\" (UID: \"feb68e18-e333-419a-acbf-7bc331cc35a8\") " pod="openstack/manila-d2d4-account-create-hl6fw"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.166392 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6ccd6d974c-ptg7b"]
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.240108 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bd3753f-127a-40e9-9406-3c34efbf1e17-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.240181 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0488c3e-43d5-4f10-b3e4-d8904a296c40-config-data\") pod \"glance-default-external-api-0\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " pod="openstack/glance-default-external-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.240212 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4ljb\" (UniqueName: \"kubernetes.io/projected/1bd3753f-127a-40e9-9406-3c34efbf1e17-kube-api-access-r4ljb\") pod \"glance-default-internal-api-0\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.240245 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1bd3753f-127a-40e9-9406-3c34efbf1e17-ceph\") pod \"glance-default-internal-api-0\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.240282 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0488c3e-43d5-4f10-b3e4-d8904a296c40-scripts\") pod \"glance-default-external-api-0\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " pod="openstack/glance-default-external-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.240305 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2cfk\" (UniqueName: \"kubernetes.io/projected/b0488c3e-43d5-4f10-b3e4-d8904a296c40-kube-api-access-g2cfk\") pod \"glance-default-external-api-0\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " pod="openstack/glance-default-external-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.240337 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2a9d62a7-fa35-4937-8cf4-31142e2f0623-horizon-secret-key\") pod \"horizon-6ccd6d974c-ptg7b\" (UID: \"2a9d62a7-fa35-4937-8cf4-31142e2f0623\") " pod="openstack/horizon-6ccd6d974c-ptg7b"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.240365 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " pod="openstack/glance-default-external-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.240407 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bd3753f-127a-40e9-9406-3c34efbf1e17-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.240432 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1bd3753f-127a-40e9-9406-3c34efbf1e17-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.240463 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a9d62a7-fa35-4937-8cf4-31142e2f0623-logs\") pod \"horizon-6ccd6d974c-ptg7b\" (UID: \"2a9d62a7-fa35-4937-8cf4-31142e2f0623\") " pod="openstack/horizon-6ccd6d974c-ptg7b"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.240553 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a9d62a7-fa35-4937-8cf4-31142e2f0623-scripts\") pod \"horizon-6ccd6d974c-ptg7b\" (UID: \"2a9d62a7-fa35-4937-8cf4-31142e2f0623\") " pod="openstack/horizon-6ccd6d974c-ptg7b"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.240632 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0488c3e-43d5-4f10-b3e4-d8904a296c40-logs\") pod \"glance-default-external-api-0\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " pod="openstack/glance-default-external-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.240700 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bd3753f-127a-40e9-9406-3c34efbf1e17-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.240765 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0488c3e-43d5-4f10-b3e4-d8904a296c40-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " pod="openstack/glance-default-external-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.240798 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a9d62a7-fa35-4937-8cf4-31142e2f0623-config-data\") pod \"horizon-6ccd6d974c-ptg7b\" (UID: \"2a9d62a7-fa35-4937-8cf4-31142e2f0623\") " pod="openstack/horizon-6ccd6d974c-ptg7b"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.240821 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bd3753f-127a-40e9-9406-3c34efbf1e17-logs\") pod \"glance-default-internal-api-0\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.240859 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0488c3e-43d5-4f10-b3e4-d8904a296c40-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " pod="openstack/glance-default-external-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.240879 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-442kk\" (UniqueName: \"kubernetes.io/projected/2a9d62a7-fa35-4937-8cf4-31142e2f0623-kube-api-access-442kk\") pod \"horizon-6ccd6d974c-ptg7b\" (UID: \"2a9d62a7-fa35-4937-8cf4-31142e2f0623\") " pod="openstack/horizon-6ccd6d974c-ptg7b"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.240907 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b0488c3e-43d5-4f10-b3e4-d8904a296c40-ceph\") pod \"glance-default-external-api-0\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " pod="openstack/glance-default-external-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.240933 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.240967 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0488c3e-43d5-4f10-b3e4-d8904a296c40-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " pod="openstack/glance-default-external-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.240989 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bd3753f-127a-40e9-9406-3c34efbf1e17-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.265340 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-d2d4-account-create-hl6fw"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.286518 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-668c6889fc-xbssb"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.344602 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0488c3e-43d5-4f10-b3e4-d8904a296c40-logs\") pod \"glance-default-external-api-0\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " pod="openstack/glance-default-external-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.344672 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bd3753f-127a-40e9-9406-3c34efbf1e17-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.344724 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0488c3e-43d5-4f10-b3e4-d8904a296c40-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " pod="openstack/glance-default-external-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.344766 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a9d62a7-fa35-4937-8cf4-31142e2f0623-config-data\") pod \"horizon-6ccd6d974c-ptg7b\" (UID: \"2a9d62a7-fa35-4937-8cf4-31142e2f0623\") " pod="openstack/horizon-6ccd6d974c-ptg7b"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.344791 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bd3753f-127a-40e9-9406-3c34efbf1e17-logs\") pod \"glance-default-internal-api-0\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.344830 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0488c3e-43d5-4f10-b3e4-d8904a296c40-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " pod="openstack/glance-default-external-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.344854 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-442kk\" (UniqueName: \"kubernetes.io/projected/2a9d62a7-fa35-4937-8cf4-31142e2f0623-kube-api-access-442kk\") pod \"horizon-6ccd6d974c-ptg7b\" (UID: \"2a9d62a7-fa35-4937-8cf4-31142e2f0623\") " pod="openstack/horizon-6ccd6d974c-ptg7b"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.344882 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b0488c3e-43d5-4f10-b3e4-d8904a296c40-ceph\") pod \"glance-default-external-api-0\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " pod="openstack/glance-default-external-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.344905 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.344936 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0488c3e-43d5-4f10-b3e4-d8904a296c40-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " pod="openstack/glance-default-external-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.344959 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bd3753f-127a-40e9-9406-3c34efbf1e17-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.344991 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bd3753f-127a-40e9-9406-3c34efbf1e17-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.345025 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0488c3e-43d5-4f10-b3e4-d8904a296c40-config-data\") pod \"glance-default-external-api-0\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " pod="openstack/glance-default-external-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.345059 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4ljb\" (UniqueName: \"kubernetes.io/projected/1bd3753f-127a-40e9-9406-3c34efbf1e17-kube-api-access-r4ljb\") pod \"glance-default-internal-api-0\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:58:54 crc kubenswrapper[5072]: 
I1124 11:58:54.345093 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1bd3753f-127a-40e9-9406-3c34efbf1e17-ceph\") pod \"glance-default-internal-api-0\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.345126 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0488c3e-43d5-4f10-b3e4-d8904a296c40-scripts\") pod \"glance-default-external-api-0\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.345148 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2cfk\" (UniqueName: \"kubernetes.io/projected/b0488c3e-43d5-4f10-b3e4-d8904a296c40-kube-api-access-g2cfk\") pod \"glance-default-external-api-0\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.345180 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2a9d62a7-fa35-4937-8cf4-31142e2f0623-horizon-secret-key\") pod \"horizon-6ccd6d974c-ptg7b\" (UID: \"2a9d62a7-fa35-4937-8cf4-31142e2f0623\") " pod="openstack/horizon-6ccd6d974c-ptg7b" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.345207 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.345230 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bd3753f-127a-40e9-9406-3c34efbf1e17-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.345256 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1bd3753f-127a-40e9-9406-3c34efbf1e17-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.345287 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a9d62a7-fa35-4937-8cf4-31142e2f0623-logs\") pod \"horizon-6ccd6d974c-ptg7b\" (UID: \"2a9d62a7-fa35-4937-8cf4-31142e2f0623\") " pod="openstack/horizon-6ccd6d974c-ptg7b" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.345338 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a9d62a7-fa35-4937-8cf4-31142e2f0623-scripts\") pod \"horizon-6ccd6d974c-ptg7b\" (UID: \"2a9d62a7-fa35-4937-8cf4-31142e2f0623\") " pod="openstack/horizon-6ccd6d974c-ptg7b" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.345898 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b0488c3e-43d5-4f10-b3e4-d8904a296c40-logs\") pod \"glance-default-external-api-0\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.346202 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a9d62a7-fa35-4937-8cf4-31142e2f0623-scripts\") pod \"horizon-6ccd6d974c-ptg7b\" (UID: \"2a9d62a7-fa35-4937-8cf4-31142e2f0623\") " pod="openstack/horizon-6ccd6d974c-ptg7b" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.346564 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0488c3e-43d5-4f10-b3e4-d8904a296c40-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.347344 5072 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.348687 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bd3753f-127a-40e9-9406-3c34efbf1e17-logs\") pod \"glance-default-internal-api-0\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.348828 5072 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.350591 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1bd3753f-127a-40e9-9406-3c34efbf1e17-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.351180 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a9d62a7-fa35-4937-8cf4-31142e2f0623-logs\") pod \"horizon-6ccd6d974c-ptg7b\" (UID: \"2a9d62a7-fa35-4937-8cf4-31142e2f0623\") " pod="openstack/horizon-6ccd6d974c-ptg7b" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.352555 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a9d62a7-fa35-4937-8cf4-31142e2f0623-config-data\") pod \"horizon-6ccd6d974c-ptg7b\" (UID: \"2a9d62a7-fa35-4937-8cf4-31142e2f0623\") " pod="openstack/horizon-6ccd6d974c-ptg7b" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.354865 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b0488c3e-43d5-4f10-b3e4-d8904a296c40-ceph\") pod \"glance-default-external-api-0\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " 
pod="openstack/glance-default-external-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.359595 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0488c3e-43d5-4f10-b3e4-d8904a296c40-config-data\") pod \"glance-default-external-api-0\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.360549 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bd3753f-127a-40e9-9406-3c34efbf1e17-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.360825 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bd3753f-127a-40e9-9406-3c34efbf1e17-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.361591 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bd3753f-127a-40e9-9406-3c34efbf1e17-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.361802 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bd3753f-127a-40e9-9406-3c34efbf1e17-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.371167 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-442kk\" (UniqueName: \"kubernetes.io/projected/2a9d62a7-fa35-4937-8cf4-31142e2f0623-kube-api-access-442kk\") pod \"horizon-6ccd6d974c-ptg7b\" (UID: \"2a9d62a7-fa35-4937-8cf4-31142e2f0623\") " pod="openstack/horizon-6ccd6d974c-ptg7b" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.372014 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1bd3753f-127a-40e9-9406-3c34efbf1e17-ceph\") pod \"glance-default-internal-api-0\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.372071 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0488c3e-43d5-4f10-b3e4-d8904a296c40-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.372261 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0488c3e-43d5-4f10-b3e4-d8904a296c40-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.376977 5072 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-r4ljb\" (UniqueName: \"kubernetes.io/projected/1bd3753f-127a-40e9-9406-3c34efbf1e17-kube-api-access-r4ljb\") pod \"glance-default-internal-api-0\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.381052 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2cfk\" (UniqueName: \"kubernetes.io/projected/b0488c3e-43d5-4f10-b3e4-d8904a296c40-kube-api-access-g2cfk\") pod \"glance-default-external-api-0\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.382755 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2a9d62a7-fa35-4937-8cf4-31142e2f0623-horizon-secret-key\") pod \"horizon-6ccd6d974c-ptg7b\" (UID: \"2a9d62a7-fa35-4937-8cf4-31142e2f0623\") " pod="openstack/horizon-6ccd6d974c-ptg7b" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.386614 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0488c3e-43d5-4f10-b3e4-d8904a296c40-scripts\") pod \"glance-default-external-api-0\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.414554 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.432881 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.466947 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6ccd6d974c-ptg7b" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.497291 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.499852 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: W1124 11:58:54.538552 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ed8d6e1_fa71_401b_acd5_341fbc2ec5a0.slice/crio-fea40a651e65ccf591750a300ec9a1ed93728ee97fd890991f24f8e362860d95 WatchSource:0}: Error finding container fea40a651e65ccf591750a300ec9a1ed93728ee97fd890991f24f8e362860d95: Status 404 returned error can't find the container with id fea40a651e65ccf591750a300ec9a1ed93728ee97fd890991f24f8e362860d95 Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.553482 5072 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.622757 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.630681 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: W1124 11:58:54.639238 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode51194ec_7c1f_4609_996f_ee210bb13bb5.slice/crio-3b2f730ecabc5ba311bc5ed7e2a7fb331c5f211bc55091580298f5a8d7f5a06f WatchSource:0}: Error finding container 3b2f730ecabc5ba311bc5ed7e2a7fb331c5f211bc55091580298f5a8d7f5a06f: Status 404 returned error can't find the container with id 3b2f730ecabc5ba311bc5ed7e2a7fb331c5f211bc55091580298f5a8d7f5a06f Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.690127 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.753173 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0488c3e-43d5-4f10-b3e4-d8904a296c40-config-data\") pod \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.753616 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0488c3e-43d5-4f10-b3e4-d8904a296c40-combined-ca-bundle\") pod \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.753672 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0488c3e-43d5-4f10-b3e4-d8904a296c40-public-tls-certs\") pod \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.753730 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0488c3e-43d5-4f10-b3e4-d8904a296c40-httpd-run\") pod \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.753785 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0488c3e-43d5-4f10-b3e4-d8904a296c40-scripts\") pod \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\" (UID: 
\"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.753816 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0488c3e-43d5-4f10-b3e4-d8904a296c40-logs\") pod \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.753842 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2cfk\" (UniqueName: \"kubernetes.io/projected/b0488c3e-43d5-4f10-b3e4-d8904a296c40-kube-api-access-g2cfk\") pod \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.753866 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b0488c3e-43d5-4f10-b3e4-d8904a296c40-ceph\") pod \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.753887 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\" (UID: \"b0488c3e-43d5-4f10-b3e4-d8904a296c40\") " Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.759576 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "b0488c3e-43d5-4f10-b3e4-d8904a296c40" (UID: "b0488c3e-43d5-4f10-b3e4-d8904a296c40"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.762470 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0488c3e-43d5-4f10-b3e4-d8904a296c40-config-data" (OuterVolumeSpecName: "config-data") pod "b0488c3e-43d5-4f10-b3e4-d8904a296c40" (UID: "b0488c3e-43d5-4f10-b3e4-d8904a296c40"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.762769 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0488c3e-43d5-4f10-b3e4-d8904a296c40-logs" (OuterVolumeSpecName: "logs") pod "b0488c3e-43d5-4f10-b3e4-d8904a296c40" (UID: "b0488c3e-43d5-4f10-b3e4-d8904a296c40"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.762780 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0488c3e-43d5-4f10-b3e4-d8904a296c40-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b0488c3e-43d5-4f10-b3e4-d8904a296c40" (UID: "b0488c3e-43d5-4f10-b3e4-d8904a296c40"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.765040 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0488c3e-43d5-4f10-b3e4-d8904a296c40-scripts" (OuterVolumeSpecName: "scripts") pod "b0488c3e-43d5-4f10-b3e4-d8904a296c40" (UID: "b0488c3e-43d5-4f10-b3e4-d8904a296c40"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.765708 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0488c3e-43d5-4f10-b3e4-d8904a296c40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0488c3e-43d5-4f10-b3e4-d8904a296c40" (UID: "b0488c3e-43d5-4f10-b3e4-d8904a296c40"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.765780 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-6hvhf"] Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.766473 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0488c3e-43d5-4f10-b3e4-d8904a296c40-ceph" (OuterVolumeSpecName: "ceph") pod "b0488c3e-43d5-4f10-b3e4-d8904a296c40" (UID: "b0488c3e-43d5-4f10-b3e4-d8904a296c40"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.769964 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0488c3e-43d5-4f10-b3e4-d8904a296c40-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b0488c3e-43d5-4f10-b3e4-d8904a296c40" (UID: "b0488c3e-43d5-4f10-b3e4-d8904a296c40"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.770077 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0488c3e-43d5-4f10-b3e4-d8904a296c40-kube-api-access-g2cfk" (OuterVolumeSpecName: "kube-api-access-g2cfk") pod "b0488c3e-43d5-4f10-b3e4-d8904a296c40" (UID: "b0488c3e-43d5-4f10-b3e4-d8904a296c40"). InnerVolumeSpecName "kube-api-access-g2cfk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.855911 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0488c3e-43d5-4f10-b3e4-d8904a296c40-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.855954 5072 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0488c3e-43d5-4f10-b3e4-d8904a296c40-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.855967 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2cfk\" (UniqueName: \"kubernetes.io/projected/b0488c3e-43d5-4f10-b3e4-d8904a296c40-kube-api-access-g2cfk\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.856065 5072 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b0488c3e-43d5-4f10-b3e4-d8904a296c40-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.856101 5072 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.856110 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0488c3e-43d5-4f10-b3e4-d8904a296c40-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.856120 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0488c3e-43d5-4f10-b3e4-d8904a296c40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.856129 5072 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0488c3e-43d5-4f10-b3e4-d8904a296c40-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.856137 5072 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0488c3e-43d5-4f10-b3e4-d8904a296c40-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.877214 5072 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.927562 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-d2d4-account-create-hl6fw"] Nov 24 11:58:54 crc kubenswrapper[5072]: I1124 11:58:54.958072 5072 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:55 crc kubenswrapper[5072]: W1124 11:58:55.024624 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18b8401a_38a6_41b3_abc0_d4924c551633.slice/crio-0dcc5f0c3978922749c77142a5ad73a5930aeb927c3f9e77f45c6659c3b0825c WatchSource:0}: Error finding container 0dcc5f0c3978922749c77142a5ad73a5930aeb927c3f9e77f45c6659c3b0825c: Status 404 returned error can't find the container with id 
0dcc5f0c3978922749c77142a5ad73a5930aeb927c3f9e77f45c6659c3b0825c Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.051873 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-668c6889fc-xbssb"] Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.103921 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6ccd6d974c-ptg7b"] Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.353645 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 11:58:55 crc kubenswrapper[5072]: W1124 11:58:55.393276 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bd3753f_127a_40e9_9406_3c34efbf1e17.slice/crio-27373fff7f0277deaa3590a9fa833ccaedf0f95f60a2ababb6a9e01ebeeb5e38 WatchSource:0}: Error finding container 27373fff7f0277deaa3590a9fa833ccaedf0f95f60a2ababb6a9e01ebeeb5e38: Status 404 returned error can't find the container with id 27373fff7f0277deaa3590a9fa833ccaedf0f95f60a2ababb6a9e01ebeeb5e38 Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.512821 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0","Type":"ContainerStarted","Data":"fea40a651e65ccf591750a300ec9a1ed93728ee97fd890991f24f8e362860d95"} Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.514343 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6ccd6d974c-ptg7b" event={"ID":"2a9d62a7-fa35-4937-8cf4-31142e2f0623","Type":"ContainerStarted","Data":"41dd769c5032ed2aac0444f5c443c2451bf77a0874ac6b4f26532df497df5ea0"} Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.515343 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"e51194ec-7c1f-4609-996f-ee210bb13bb5","Type":"ContainerStarted","Data":"3b2f730ecabc5ba311bc5ed7e2a7fb331c5f211bc55091580298f5a8d7f5a06f"} Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.516781 5072 generic.go:334] "Generic (PLEG): container finished" podID="e2b9ee49-0cbe-43d3-a768-74c71d0f79e8" containerID="c753631300873ca499bd1d589d519ad1c4a6114154e797749625b80ba3094c6d" exitCode=0 Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.516861 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-6hvhf" event={"ID":"e2b9ee49-0cbe-43d3-a768-74c71d0f79e8","Type":"ContainerDied","Data":"c753631300873ca499bd1d589d519ad1c4a6114154e797749625b80ba3094c6d"} Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.516897 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-6hvhf" event={"ID":"e2b9ee49-0cbe-43d3-a768-74c71d0f79e8","Type":"ContainerStarted","Data":"7f0b2b0234394e4acb72481245d99d7995d79b20ca2f8176567cdb63eec6681e"} Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.522907 5072 generic.go:334] "Generic (PLEG): container finished" podID="feb68e18-e333-419a-acbf-7bc331cc35a8" containerID="f81646fb82089e09d7e9fe5fc7e11e71bb909c110f7a9bfd42acb274ae728a79" exitCode=0 Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.522957 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-d2d4-account-create-hl6fw" event={"ID":"feb68e18-e333-419a-acbf-7bc331cc35a8","Type":"ContainerDied","Data":"f81646fb82089e09d7e9fe5fc7e11e71bb909c110f7a9bfd42acb274ae728a79"} Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.522996 5072 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/manila-d2d4-account-create-hl6fw" event={"ID":"feb68e18-e333-419a-acbf-7bc331cc35a8","Type":"ContainerStarted","Data":"3c5e554e9a5cddcb2de54bf895491c7580dd9de54f6958d55e818dff968c96eb"} Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.524454 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1bd3753f-127a-40e9-9406-3c34efbf1e17","Type":"ContainerStarted","Data":"27373fff7f0277deaa3590a9fa833ccaedf0f95f60a2ababb6a9e01ebeeb5e38"} Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.526357 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-668c6889fc-xbssb" event={"ID":"18b8401a-38a6-41b3-abc0-d4924c551633","Type":"ContainerStarted","Data":"0dcc5f0c3978922749c77142a5ad73a5930aeb927c3f9e77f45c6659c3b0825c"} Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.526479 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.638269 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.645290 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.673474 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.678525 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.682536 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.683000 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.698190 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.780322 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.780392 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-logs\") pod \"glance-default-external-api-0\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.780451 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-scripts\") pod \"glance-default-external-api-0\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.780476 5072 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.780495 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgl5v\" (UniqueName: \"kubernetes.io/projected/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-kube-api-access-sgl5v\") pod \"glance-default-external-api-0\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.780538 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.780566 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.780608 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-config-data\") pod \"glance-default-external-api-0\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.780809 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-ceph\") pod \"glance-default-external-api-0\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.882977 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-ceph\") pod \"glance-default-external-api-0\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.883062 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.883101 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-logs\") pod \"glance-default-external-api-0\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.883177 5072 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-scripts\") pod \"glance-default-external-api-0\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.883211 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.883236 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgl5v\" (UniqueName: \"kubernetes.io/projected/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-kube-api-access-sgl5v\") pod \"glance-default-external-api-0\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.883299 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.883330 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.883400 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-config-data\") pod \"glance-default-external-api-0\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.884288 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-logs\") pod \"glance-default-external-api-0\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.884558 5072 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.886617 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.889594 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.890802 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-scripts\") pod \"glance-default-external-api-0\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.893798 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-config-data\") pod \"glance-default-external-api-0\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.895307 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.897007 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-ceph\") pod \"glance-default-external-api-0\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.904200 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgl5v\" (UniqueName: \"kubernetes.io/projected/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-kube-api-access-sgl5v\") pod \"glance-default-external-api-0\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:55 crc kubenswrapper[5072]: I1124 11:58:55.931849 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") " pod="openstack/glance-default-external-api-0" Nov 24 11:58:56 crc kubenswrapper[5072]: I1124 11:58:56.020989 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 11:58:56 crc kubenswrapper[5072]: I1124 11:58:56.543342 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0","Type":"ContainerStarted","Data":"cb6986f1abd4cc776240c254e7b39a2a3cff54eb3af564e512ef3384c8cafbf6"} Nov 24 11:58:56 crc kubenswrapper[5072]: I1124 11:58:56.547711 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"e51194ec-7c1f-4609-996f-ee210bb13bb5","Type":"ContainerStarted","Data":"d147a128ef97103c01f27efd5181b5d315562ee7d3fff5ba253441783fd4a54a"} Nov 24 11:58:56 crc kubenswrapper[5072]: I1124 11:58:56.547745 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"e51194ec-7c1f-4609-996f-ee210bb13bb5","Type":"ContainerStarted","Data":"3376adc0e6ad1024c2bd2efdf4c1f9dcc43b58f7117854ac5f3e4bf0d6a3bf96"} Nov 24 11:58:56 crc kubenswrapper[5072]: I1124 11:58:56.649429 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 11:58:56 crc kubenswrapper[5072]: I1124 11:58:56.820468 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-668c6889fc-xbssb"] Nov 24 11:58:56 crc kubenswrapper[5072]: I1124 11:58:56.897409 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 11:58:56 crc kubenswrapper[5072]: I1124 11:58:56.940424 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-587d57694d-km6sf"] Nov 24 11:58:56 crc kubenswrapper[5072]: I1124 11:58:56.942249 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-587d57694d-km6sf" Nov 24 11:58:56 crc kubenswrapper[5072]: I1124 11:58:56.956074 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.022167 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-horizon-tls-certs\") pod \"horizon-587d57694d-km6sf\" (UID: \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\") " pod="openstack/horizon-587d57694d-km6sf" Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.022475 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-logs\") pod \"horizon-587d57694d-km6sf\" (UID: \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\") " pod="openstack/horizon-587d57694d-km6sf" Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.023001 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-horizon-secret-key\") pod \"horizon-587d57694d-km6sf\" (UID: \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\") " pod="openstack/horizon-587d57694d-km6sf" Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.023137 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-scripts\") pod \"horizon-587d57694d-km6sf\" (UID: \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\") " pod="openstack/horizon-587d57694d-km6sf" Nov 24 
11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.023158 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-combined-ca-bundle\") pod \"horizon-587d57694d-km6sf\" (UID: \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\") " pod="openstack/horizon-587d57694d-km6sf"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.023196 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-config-data\") pod \"horizon-587d57694d-km6sf\" (UID: \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\") " pod="openstack/horizon-587d57694d-km6sf"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.023311 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97wgk\" (UniqueName: \"kubernetes.io/projected/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-kube-api-access-97wgk\") pod \"horizon-587d57694d-km6sf\" (UID: \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\") " pod="openstack/horizon-587d57694d-km6sf"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.111568 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-6hvhf"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.112898 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0488c3e-43d5-4f10-b3e4-d8904a296c40" path="/var/lib/kubelet/pods/b0488c3e-43d5-4f10-b3e4-d8904a296c40/volumes"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.113585 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-587d57694d-km6sf"]
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.113757 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6ccd6d974c-ptg7b"]
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.115651 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-575b5d47b6-n66fd"]
Nov 24 11:58:57 crc kubenswrapper[5072]: E1124 11:58:57.116665 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2b9ee49-0cbe-43d3-a768-74c71d0f79e8" containerName="mariadb-database-create"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.116764 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2b9ee49-0cbe-43d3-a768-74c71d0f79e8" containerName="mariadb-database-create"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.117219 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2b9ee49-0cbe-43d3-a768-74c71d0f79e8" containerName="mariadb-database-create"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.120178 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-575b5d47b6-n66fd"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.127570 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-horizon-secret-key\") pod \"horizon-587d57694d-km6sf\" (UID: \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\") " pod="openstack/horizon-587d57694d-km6sf"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.127990 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-scripts\") pod \"horizon-587d57694d-km6sf\" (UID: \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\") " pod="openstack/horizon-587d57694d-km6sf"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.133900 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-horizon-secret-key\") pod \"horizon-587d57694d-km6sf\" (UID: \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\") " pod="openstack/horizon-587d57694d-km6sf"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.137750 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-scripts\") pod \"horizon-587d57694d-km6sf\" (UID: \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\") " pod="openstack/horizon-587d57694d-km6sf"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.142697 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-combined-ca-bundle\") pod \"horizon-587d57694d-km6sf\" (UID: \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\") " pod="openstack/horizon-587d57694d-km6sf"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.142791 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-config-data\") pod \"horizon-587d57694d-km6sf\" (UID: \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\") " pod="openstack/horizon-587d57694d-km6sf"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.142897 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97wgk\" (UniqueName: \"kubernetes.io/projected/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-kube-api-access-97wgk\") pod \"horizon-587d57694d-km6sf\" (UID: \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\") " pod="openstack/horizon-587d57694d-km6sf"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.143043 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-horizon-tls-certs\") pod \"horizon-587d57694d-km6sf\" (UID: \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\") " pod="openstack/horizon-587d57694d-km6sf"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.143081 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-logs\") pod \"horizon-587d57694d-km6sf\" (UID: \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\") " pod="openstack/horizon-587d57694d-km6sf"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.143603 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-logs\") pod \"horizon-587d57694d-km6sf\" (UID: \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\") " pod="openstack/horizon-587d57694d-km6sf"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.145438 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-config-data\") pod \"horizon-587d57694d-km6sf\" (UID: \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\") " pod="openstack/horizon-587d57694d-km6sf"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.162420 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.167348 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-d2d4-account-create-hl6fw"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.169528 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97wgk\" (UniqueName: \"kubernetes.io/projected/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-kube-api-access-97wgk\") pod \"horizon-587d57694d-km6sf\" (UID: \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\") " pod="openstack/horizon-587d57694d-km6sf"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.170928 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-horizon-tls-certs\") pod \"horizon-587d57694d-km6sf\" (UID: \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\") " pod="openstack/horizon-587d57694d-km6sf"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.180895 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-combined-ca-bundle\") pod \"horizon-587d57694d-km6sf\" (UID: \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\") " pod="openstack/horizon-587d57694d-km6sf"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.185913 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-575b5d47b6-n66fd"]
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.244635 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb68e18-e333-419a-acbf-7bc331cc35a8-operator-scripts\") pod \"feb68e18-e333-419a-acbf-7bc331cc35a8\" (UID: \"feb68e18-e333-419a-acbf-7bc331cc35a8\") "
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.244730 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2b9ee49-0cbe-43d3-a768-74c71d0f79e8-operator-scripts\") pod \"e2b9ee49-0cbe-43d3-a768-74c71d0f79e8\" (UID: \"e2b9ee49-0cbe-43d3-a768-74c71d0f79e8\") "
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.244795 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh8s4\" (UniqueName: \"kubernetes.io/projected/feb68e18-e333-419a-acbf-7bc331cc35a8-kube-api-access-xh8s4\") pod \"feb68e18-e333-419a-acbf-7bc331cc35a8\" (UID: \"feb68e18-e333-419a-acbf-7bc331cc35a8\") "
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.244909 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2rh9\" (UniqueName: \"kubernetes.io/projected/e2b9ee49-0cbe-43d3-a768-74c71d0f79e8-kube-api-access-z2rh9\") pod \"e2b9ee49-0cbe-43d3-a768-74c71d0f79e8\" (UID: \"e2b9ee49-0cbe-43d3-a768-74c71d0f79e8\") "
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.245329 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/78739666-79c8-4af9-9766-6793e7975629-horizon-secret-key\") pod \"horizon-575b5d47b6-n66fd\" (UID: \"78739666-79c8-4af9-9766-6793e7975629\") " pod="openstack/horizon-575b5d47b6-n66fd"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.245399 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78739666-79c8-4af9-9766-6793e7975629-logs\") pod \"horizon-575b5d47b6-n66fd\" (UID: \"78739666-79c8-4af9-9766-6793e7975629\") " pod="openstack/horizon-575b5d47b6-n66fd"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.245479 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78739666-79c8-4af9-9766-6793e7975629-combined-ca-bundle\") pod \"horizon-575b5d47b6-n66fd\" (UID: \"78739666-79c8-4af9-9766-6793e7975629\") " pod="openstack/horizon-575b5d47b6-n66fd"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.245566 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/78739666-79c8-4af9-9766-6793e7975629-horizon-tls-certs\") pod \"horizon-575b5d47b6-n66fd\" (UID: \"78739666-79c8-4af9-9766-6793e7975629\") " pod="openstack/horizon-575b5d47b6-n66fd"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.245640 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78739666-79c8-4af9-9766-6793e7975629-scripts\") pod \"horizon-575b5d47b6-n66fd\" (UID: \"78739666-79c8-4af9-9766-6793e7975629\") " pod="openstack/horizon-575b5d47b6-n66fd"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.245661 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78739666-79c8-4af9-9766-6793e7975629-config-data\") pod \"horizon-575b5d47b6-n66fd\" (UID: \"78739666-79c8-4af9-9766-6793e7975629\") " pod="openstack/horizon-575b5d47b6-n66fd"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.245712 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtdqf\" (UniqueName: \"kubernetes.io/projected/78739666-79c8-4af9-9766-6793e7975629-kube-api-access-xtdqf\") pod \"horizon-575b5d47b6-n66fd\" (UID: \"78739666-79c8-4af9-9766-6793e7975629\") " pod="openstack/horizon-575b5d47b6-n66fd"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.245558 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2b9ee49-0cbe-43d3-a768-74c71d0f79e8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2b9ee49-0cbe-43d3-a768-74c71d0f79e8" (UID: "e2b9ee49-0cbe-43d3-a768-74c71d0f79e8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.246055 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/feb68e18-e333-419a-acbf-7bc331cc35a8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "feb68e18-e333-419a-acbf-7bc331cc35a8" (UID: "feb68e18-e333-419a-acbf-7bc331cc35a8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.249345 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feb68e18-e333-419a-acbf-7bc331cc35a8-kube-api-access-xh8s4" (OuterVolumeSpecName: "kube-api-access-xh8s4") pod "feb68e18-e333-419a-acbf-7bc331cc35a8" (UID: "feb68e18-e333-419a-acbf-7bc331cc35a8"). InnerVolumeSpecName "kube-api-access-xh8s4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.249973 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2b9ee49-0cbe-43d3-a768-74c71d0f79e8-kube-api-access-z2rh9" (OuterVolumeSpecName: "kube-api-access-z2rh9") pod "e2b9ee49-0cbe-43d3-a768-74c71d0f79e8" (UID: "e2b9ee49-0cbe-43d3-a768-74c71d0f79e8"). InnerVolumeSpecName "kube-api-access-z2rh9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.282792 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-587d57694d-km6sf"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.347942 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/78739666-79c8-4af9-9766-6793e7975629-horizon-tls-certs\") pod \"horizon-575b5d47b6-n66fd\" (UID: \"78739666-79c8-4af9-9766-6793e7975629\") " pod="openstack/horizon-575b5d47b6-n66fd"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.348035 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78739666-79c8-4af9-9766-6793e7975629-config-data\") pod \"horizon-575b5d47b6-n66fd\" (UID: \"78739666-79c8-4af9-9766-6793e7975629\") " pod="openstack/horizon-575b5d47b6-n66fd"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.348058 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78739666-79c8-4af9-9766-6793e7975629-scripts\") pod \"horizon-575b5d47b6-n66fd\" (UID: \"78739666-79c8-4af9-9766-6793e7975629\") " pod="openstack/horizon-575b5d47b6-n66fd"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.348108 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtdqf\" (UniqueName: \"kubernetes.io/projected/78739666-79c8-4af9-9766-6793e7975629-kube-api-access-xtdqf\") pod \"horizon-575b5d47b6-n66fd\" (UID: \"78739666-79c8-4af9-9766-6793e7975629\") " pod="openstack/horizon-575b5d47b6-n66fd"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.348147 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/78739666-79c8-4af9-9766-6793e7975629-horizon-secret-key\") pod \"horizon-575b5d47b6-n66fd\" (UID: \"78739666-79c8-4af9-9766-6793e7975629\") " pod="openstack/horizon-575b5d47b6-n66fd"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.348172 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78739666-79c8-4af9-9766-6793e7975629-logs\") pod \"horizon-575b5d47b6-n66fd\" (UID: \"78739666-79c8-4af9-9766-6793e7975629\") " pod="openstack/horizon-575b5d47b6-n66fd"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.348231 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78739666-79c8-4af9-9766-6793e7975629-combined-ca-bundle\") pod \"horizon-575b5d47b6-n66fd\" (UID: \"78739666-79c8-4af9-9766-6793e7975629\") " pod="openstack/horizon-575b5d47b6-n66fd"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.348312 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh8s4\" (UniqueName: \"kubernetes.io/projected/feb68e18-e333-419a-acbf-7bc331cc35a8-kube-api-access-xh8s4\") on node \"crc\" DevicePath \"\""
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.348329 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2rh9\" (UniqueName: \"kubernetes.io/projected/e2b9ee49-0cbe-43d3-a768-74c71d0f79e8-kube-api-access-z2rh9\") on node \"crc\" DevicePath \"\""
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.348343 5072 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb68e18-e333-419a-acbf-7bc331cc35a8-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.348355 5072 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2b9ee49-0cbe-43d3-a768-74c71d0f79e8-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.349243 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78739666-79c8-4af9-9766-6793e7975629-logs\") pod \"horizon-575b5d47b6-n66fd\" (UID: \"78739666-79c8-4af9-9766-6793e7975629\") " pod="openstack/horizon-575b5d47b6-n66fd"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.350519 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78739666-79c8-4af9-9766-6793e7975629-config-data\") pod \"horizon-575b5d47b6-n66fd\" (UID: \"78739666-79c8-4af9-9766-6793e7975629\") " pod="openstack/horizon-575b5d47b6-n66fd"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.351128 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78739666-79c8-4af9-9766-6793e7975629-scripts\") pod \"horizon-575b5d47b6-n66fd\" (UID: \"78739666-79c8-4af9-9766-6793e7975629\") " pod="openstack/horizon-575b5d47b6-n66fd"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.355359 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/78739666-79c8-4af9-9766-6793e7975629-horizon-secret-key\") pod \"horizon-575b5d47b6-n66fd\" (UID: \"78739666-79c8-4af9-9766-6793e7975629\") " pod="openstack/horizon-575b5d47b6-n66fd"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.356163 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78739666-79c8-4af9-9766-6793e7975629-combined-ca-bundle\") pod \"horizon-575b5d47b6-n66fd\" (UID: \"78739666-79c8-4af9-9766-6793e7975629\") " pod="openstack/horizon-575b5d47b6-n66fd"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.358706 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/78739666-79c8-4af9-9766-6793e7975629-horizon-tls-certs\") pod \"horizon-575b5d47b6-n66fd\" (UID: \"78739666-79c8-4af9-9766-6793e7975629\") " pod="openstack/horizon-575b5d47b6-n66fd"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.372546 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtdqf\" (UniqueName: \"kubernetes.io/projected/78739666-79c8-4af9-9766-6793e7975629-kube-api-access-xtdqf\") pod \"horizon-575b5d47b6-n66fd\" (UID: \"78739666-79c8-4af9-9766-6793e7975629\") " pod="openstack/horizon-575b5d47b6-n66fd"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.452991 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-575b5d47b6-n66fd"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.581351 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-d2d4-account-create-hl6fw" event={"ID":"feb68e18-e333-419a-acbf-7bc331cc35a8","Type":"ContainerDied","Data":"3c5e554e9a5cddcb2de54bf895491c7580dd9de54f6958d55e818dff968c96eb"}
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.581763 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c5e554e9a5cddcb2de54bf895491c7580dd9de54f6958d55e818dff968c96eb"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.581838 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-d2d4-account-create-hl6fw"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.593339 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1bc09d77-5ad5-40bb-a7d9-327834ebfd07","Type":"ContainerStarted","Data":"2ab19809bff85d08b779239c5adc9b78c44ff708b92845c06130cfd72bacea81"}
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.599102 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1bd3753f-127a-40e9-9406-3c34efbf1e17","Type":"ContainerStarted","Data":"521297ae15ef99c9607a4b67b97725c860bd33ba8eb7388f6b45293742ef3cac"}
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.603414 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0","Type":"ContainerStarted","Data":"c993cd41a928a52708d09aa83c51e31150b4f07ccf3ec7314628bf22cd3c2844"}
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.610847 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-6hvhf"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.612199 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-6hvhf" event={"ID":"e2b9ee49-0cbe-43d3-a768-74c71d0f79e8","Type":"ContainerDied","Data":"7f0b2b0234394e4acb72481245d99d7995d79b20ca2f8176567cdb63eec6681e"}
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.612248 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f0b2b0234394e4acb72481245d99d7995d79b20ca2f8176567cdb63eec6681e"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.638513 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=3.423032291 podStartE2EDuration="4.638493131s" podCreationTimestamp="2025-11-24 11:58:53 +0000 UTC" firstStartedPulling="2025-11-24 11:58:54.553238767 +0000 UTC m=+2986.264763243" lastFinishedPulling="2025-11-24 11:58:55.768699607 +0000 UTC m=+2987.480224083" observedRunningTime="2025-11-24 11:58:57.634101132 +0000 UTC m=+2989.345625608" watchObservedRunningTime="2025-11-24 11:58:57.638493131 +0000 UTC m=+2989.350017617"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.853662 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.768481985 podStartE2EDuration="4.853635219s" podCreationTimestamp="2025-11-24 11:58:53 +0000 UTC" firstStartedPulling="2025-11-24 11:58:54.643935196 +0000 UTC m=+2986.355459682" lastFinishedPulling="2025-11-24 11:58:55.72908844 +0000 UTC m=+2987.440612916" observedRunningTime="2025-11-24 11:58:57.662990911 +0000 UTC m=+2989.374515387" watchObservedRunningTime="2025-11-24 11:58:57.853635219 +0000 UTC m=+2989.565159695"
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.865716 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-587d57694d-km6sf"]
Nov 24 11:58:57 crc kubenswrapper[5072]: W1124 11:58:57.902839 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ae3eb1b_1c4a_4e8b_8429_f55ce79cca8f.slice/crio-c6d1efd7e2eb92c89e6fe373f194bd3a485005398840d3f80a84925037318db1 WatchSource:0}: Error finding container c6d1efd7e2eb92c89e6fe373f194bd3a485005398840d3f80a84925037318db1: Status 404 returned error can't find the container with id c6d1efd7e2eb92c89e6fe373f194bd3a485005398840d3f80a84925037318db1
Nov 24 11:58:57 crc kubenswrapper[5072]: I1124 11:58:57.942653 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-575b5d47b6-n66fd"]
Nov 24 11:58:57 crc kubenswrapper[5072]: W1124 11:58:57.968613 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78739666_79c8_4af9_9766_6793e7975629.slice/crio-2309bb56fb0cd1df1534288f43269d231e5f4c3638129cdffd5dea89dc7e60e3 WatchSource:0}: Error finding container 2309bb56fb0cd1df1534288f43269d231e5f4c3638129cdffd5dea89dc7e60e3: Status 404 returned error can't find the container with id 2309bb56fb0cd1df1534288f43269d231e5f4c3638129cdffd5dea89dc7e60e3
Nov 24 11:58:58 crc kubenswrapper[5072]: I1124 11:58:58.524254 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0"
Nov 24 11:58:58 crc kubenswrapper[5072]: I1124 11:58:58.619681 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-587d57694d-km6sf" event={"ID":"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f","Type":"ContainerStarted","Data":"c6d1efd7e2eb92c89e6fe373f194bd3a485005398840d3f80a84925037318db1"}
Nov 24 11:58:58 crc kubenswrapper[5072]: I1124 11:58:58.620912 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-575b5d47b6-n66fd" event={"ID":"78739666-79c8-4af9-9766-6793e7975629","Type":"ContainerStarted","Data":"2309bb56fb0cd1df1534288f43269d231e5f4c3638129cdffd5dea89dc7e60e3"}
Nov 24 11:58:58 crc kubenswrapper[5072]: I1124 11:58:58.625109 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1bc09d77-5ad5-40bb-a7d9-327834ebfd07","Type":"ContainerStarted","Data":"ab7a9b5c6b635c90135f4e4a5eec2cc51cc65df4dee1294b128bafdd964a954b"}
Nov 24 11:58:58 crc kubenswrapper[5072]: I1124 11:58:58.631023 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1bd3753f-127a-40e9-9406-3c34efbf1e17" containerName="glance-log" containerID="cri-o://521297ae15ef99c9607a4b67b97725c860bd33ba8eb7388f6b45293742ef3cac" gracePeriod=30
Nov 24 11:58:58 crc kubenswrapper[5072]: I1124 11:58:58.631267 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1bd3753f-127a-40e9-9406-3c34efbf1e17","Type":"ContainerStarted","Data":"7fab48aa07d1d4d85e8ccc0562e8a42afc322a02d1beadbcab461b2638247ed2"}
Nov 24 11:58:58 crc kubenswrapper[5072]: I1124 11:58:58.631526 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1bd3753f-127a-40e9-9406-3c34efbf1e17" containerName="glance-httpd" containerID="cri-o://7fab48aa07d1d4d85e8ccc0562e8a42afc322a02d1beadbcab461b2638247ed2" gracePeriod=30
Nov 24 11:58:58 crc kubenswrapper[5072]: I1124 11:58:58.645775 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0"
Nov 24 11:58:58 crc kubenswrapper[5072]: I1124 11:58:58.669042 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.669025866 podStartE2EDuration="5.669025866s" podCreationTimestamp="2025-11-24 11:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:58.660317059 +0000 UTC m=+2990.371841555" watchObservedRunningTime="2025-11-24 11:58:58.669025866 +0000 UTC m=+2990.380550342"
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.412538 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-b55tw"]
Nov 24 11:58:59 crc kubenswrapper[5072]: E1124 11:58:59.413416 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb68e18-e333-419a-acbf-7bc331cc35a8" containerName="mariadb-account-create"
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.413441 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb68e18-e333-419a-acbf-7bc331cc35a8" containerName="mariadb-account-create"
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.413689 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="feb68e18-e333-419a-acbf-7bc331cc35a8" containerName="mariadb-account-create"
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.419005 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-b55tw"
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.425184 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-2wtjm"
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.425444 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data"
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.449148 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-b55tw"]
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.505348 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a074607-4e56-4d2e-a4ee-87906af89764-combined-ca-bundle\") pod \"manila-db-sync-b55tw\" (UID: \"4a074607-4e56-4d2e-a4ee-87906af89764\") " pod="openstack/manila-db-sync-b55tw"
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.507953 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/4a074607-4e56-4d2e-a4ee-87906af89764-job-config-data\") pod \"manila-db-sync-b55tw\" (UID: \"4a074607-4e56-4d2e-a4ee-87906af89764\") " pod="openstack/manila-db-sync-b55tw"
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.508223 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a074607-4e56-4d2e-a4ee-87906af89764-config-data\") pod \"manila-db-sync-b55tw\" (UID: \"4a074607-4e56-4d2e-a4ee-87906af89764\") " pod="openstack/manila-db-sync-b55tw"
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.508544 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7h55\" (UniqueName: \"kubernetes.io/projected/4a074607-4e56-4d2e-a4ee-87906af89764-kube-api-access-t7h55\") pod \"manila-db-sync-b55tw\" (UID: \"4a074607-4e56-4d2e-a4ee-87906af89764\") " pod="openstack/manila-db-sync-b55tw"
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.611721 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a074607-4e56-4d2e-a4ee-87906af89764-combined-ca-bundle\") pod \"manila-db-sync-b55tw\" (UID: \"4a074607-4e56-4d2e-a4ee-87906af89764\") " pod="openstack/manila-db-sync-b55tw"
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.612054 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/4a074607-4e56-4d2e-a4ee-87906af89764-job-config-data\") pod \"manila-db-sync-b55tw\" (UID: \"4a074607-4e56-4d2e-a4ee-87906af89764\") " pod="openstack/manila-db-sync-b55tw"
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.612088 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a074607-4e56-4d2e-a4ee-87906af89764-config-data\") pod \"manila-db-sync-b55tw\" (UID: \"4a074607-4e56-4d2e-a4ee-87906af89764\") " pod="openstack/manila-db-sync-b55tw"
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.612128 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7h55\" (UniqueName: \"kubernetes.io/projected/4a074607-4e56-4d2e-a4ee-87906af89764-kube-api-access-t7h55\") pod \"manila-db-sync-b55tw\" (UID: \"4a074607-4e56-4d2e-a4ee-87906af89764\") " pod="openstack/manila-db-sync-b55tw"
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.620788 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a074607-4e56-4d2e-a4ee-87906af89764-combined-ca-bundle\") pod \"manila-db-sync-b55tw\" (UID: \"4a074607-4e56-4d2e-a4ee-87906af89764\") " pod="openstack/manila-db-sync-b55tw"
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.625224 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a074607-4e56-4d2e-a4ee-87906af89764-config-data\") pod \"manila-db-sync-b55tw\" (UID: \"4a074607-4e56-4d2e-a4ee-87906af89764\") " pod="openstack/manila-db-sync-b55tw"
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.633209 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/4a074607-4e56-4d2e-a4ee-87906af89764-job-config-data\") pod \"manila-db-sync-b55tw\" (UID: \"4a074607-4e56-4d2e-a4ee-87906af89764\") " pod="openstack/manila-db-sync-b55tw"
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.633974 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7h55\" (UniqueName: \"kubernetes.io/projected/4a074607-4e56-4d2e-a4ee-87906af89764-kube-api-access-t7h55\") pod \"manila-db-sync-b55tw\" (UID: \"4a074607-4e56-4d2e-a4ee-87906af89764\") " pod="openstack/manila-db-sync-b55tw"
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.649112 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1bc09d77-5ad5-40bb-a7d9-327834ebfd07","Type":"ContainerStarted","Data":"1f9512203b5d653be41e6eb617e459328c12ee26a9cf65323229d95f1bf8c6e0"}
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.649274 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1bc09d77-5ad5-40bb-a7d9-327834ebfd07" containerName="glance-log" containerID="cri-o://ab7a9b5c6b635c90135f4e4a5eec2cc51cc65df4dee1294b128bafdd964a954b" gracePeriod=30
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.650024 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1bc09d77-5ad5-40bb-a7d9-327834ebfd07" containerName="glance-httpd" containerID="cri-o://1f9512203b5d653be41e6eb617e459328c12ee26a9cf65323229d95f1bf8c6e0" gracePeriod=30
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.692849 5072 generic.go:334] "Generic (PLEG): container finished" podID="1bd3753f-127a-40e9-9406-3c34efbf1e17" containerID="7fab48aa07d1d4d85e8ccc0562e8a42afc322a02d1beadbcab461b2638247ed2" exitCode=143
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.692877 5072 generic.go:334] "Generic (PLEG): container finished" podID="1bd3753f-127a-40e9-9406-3c34efbf1e17" containerID="521297ae15ef99c9607a4b67b97725c860bd33ba8eb7388f6b45293742ef3cac" exitCode=143
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.693220 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1bd3753f-127a-40e9-9406-3c34efbf1e17","Type":"ContainerDied","Data":"7fab48aa07d1d4d85e8ccc0562e8a42afc322a02d1beadbcab461b2638247ed2"}
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.693272 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1bd3753f-127a-40e9-9406-3c34efbf1e17","Type":"ContainerDied","Data":"521297ae15ef99c9607a4b67b97725c860bd33ba8eb7388f6b45293742ef3cac"}
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.714470 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.714446681 podStartE2EDuration="4.714446681s" podCreationTimestamp="2025-11-24 11:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:58:59.684502715 +0000 UTC m=+2991.396027201" watchObservedRunningTime="2025-11-24 11:58:59.714446681 +0000 UTC m=+2991.425971157"
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.845899 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-b55tw"
Nov 24 11:58:59 crc kubenswrapper[5072]: I1124 11:58:59.963795 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.017100 5072 scope.go:117] "RemoveContainer" containerID="4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f"
Nov 24 11:59:00 crc kubenswrapper[5072]: E1124 11:59:00.017481 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.043821 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bd3753f-127a-40e9-9406-3c34efbf1e17-combined-ca-bundle\") pod \"1bd3753f-127a-40e9-9406-3c34efbf1e17\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") "
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.043930 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bd3753f-127a-40e9-9406-3c34efbf1e17-scripts\") pod \"1bd3753f-127a-40e9-9406-3c34efbf1e17\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") "
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.044232 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bd3753f-127a-40e9-9406-3c34efbf1e17-internal-tls-certs\") pod \"1bd3753f-127a-40e9-9406-3c34efbf1e17\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") "
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.044347 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bd3753f-127a-40e9-9406-3c34efbf1e17-config-data\") pod \"1bd3753f-127a-40e9-9406-3c34efbf1e17\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") "
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.044421 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bd3753f-127a-40e9-9406-3c34efbf1e17-logs\") pod \"1bd3753f-127a-40e9-9406-3c34efbf1e17\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") "
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.044458 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1bd3753f-127a-40e9-9406-3c34efbf1e17-ceph\") pod \"1bd3753f-127a-40e9-9406-3c34efbf1e17\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") "
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.044477 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4ljb\" (UniqueName: \"kubernetes.io/projected/1bd3753f-127a-40e9-9406-3c34efbf1e17-kube-api-access-r4ljb\") pod \"1bd3753f-127a-40e9-9406-3c34efbf1e17\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") "
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.044519 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1bd3753f-127a-40e9-9406-3c34efbf1e17-httpd-run\") pod \"1bd3753f-127a-40e9-9406-3c34efbf1e17\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") "
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.044540 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"1bd3753f-127a-40e9-9406-3c34efbf1e17\" (UID: \"1bd3753f-127a-40e9-9406-3c34efbf1e17\") "
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.046112 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bd3753f-127a-40e9-9406-3c34efbf1e17-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1bd3753f-127a-40e9-9406-3c34efbf1e17" (UID: "1bd3753f-127a-40e9-9406-3c34efbf1e17"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.046393 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bd3753f-127a-40e9-9406-3c34efbf1e17-logs" (OuterVolumeSpecName: "logs") pod "1bd3753f-127a-40e9-9406-3c34efbf1e17" (UID: "1bd3753f-127a-40e9-9406-3c34efbf1e17"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.050451 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bd3753f-127a-40e9-9406-3c34efbf1e17-kube-api-access-r4ljb" (OuterVolumeSpecName: "kube-api-access-r4ljb") pod "1bd3753f-127a-40e9-9406-3c34efbf1e17" (UID: "1bd3753f-127a-40e9-9406-3c34efbf1e17"). InnerVolumeSpecName "kube-api-access-r4ljb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.050728 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "1bd3753f-127a-40e9-9406-3c34efbf1e17" (UID: "1bd3753f-127a-40e9-9406-3c34efbf1e17"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.050931 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bd3753f-127a-40e9-9406-3c34efbf1e17-ceph" (OuterVolumeSpecName: "ceph") pod "1bd3753f-127a-40e9-9406-3c34efbf1e17" (UID: "1bd3753f-127a-40e9-9406-3c34efbf1e17"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.052945 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bd3753f-127a-40e9-9406-3c34efbf1e17-scripts" (OuterVolumeSpecName: "scripts") pod "1bd3753f-127a-40e9-9406-3c34efbf1e17" (UID: "1bd3753f-127a-40e9-9406-3c34efbf1e17"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.089431 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bd3753f-127a-40e9-9406-3c34efbf1e17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1bd3753f-127a-40e9-9406-3c34efbf1e17" (UID: "1bd3753f-127a-40e9-9406-3c34efbf1e17"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.148074 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bd3753f-127a-40e9-9406-3c34efbf1e17-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.148113 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bd3753f-127a-40e9-9406-3c34efbf1e17-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.148128 5072 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bd3753f-127a-40e9-9406-3c34efbf1e17-logs\") on node \"crc\" DevicePath \"\""
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.148140 5072 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1bd3753f-127a-40e9-9406-3c34efbf1e17-ceph\") on node \"crc\" DevicePath \"\""
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.148152 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4ljb\" (UniqueName: \"kubernetes.io/projected/1bd3753f-127a-40e9-9406-3c34efbf1e17-kube-api-access-r4ljb\") on node \"crc\" DevicePath \"\""
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.148164 5072 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1bd3753f-127a-40e9-9406-3c34efbf1e17-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.148188 5072 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.210746 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bd3753f-127a-40e9-9406-3c34efbf1e17-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1bd3753f-127a-40e9-9406-3c34efbf1e17" (UID: "1bd3753f-127a-40e9-9406-3c34efbf1e17"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.222504 5072 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.236964 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bd3753f-127a-40e9-9406-3c34efbf1e17-config-data" (OuterVolumeSpecName: "config-data") pod "1bd3753f-127a-40e9-9406-3c34efbf1e17" (UID: "1bd3753f-127a-40e9-9406-3c34efbf1e17"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.249602 5072 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.249685 5072 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bd3753f-127a-40e9-9406-3c34efbf1e17-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.249697 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bd3753f-127a-40e9-9406-3c34efbf1e17-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.654553 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-b55tw"]
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.720571 5072 generic.go:334] "Generic (PLEG): container finished" podID="1bc09d77-5ad5-40bb-a7d9-327834ebfd07" containerID="1f9512203b5d653be41e6eb617e459328c12ee26a9cf65323229d95f1bf8c6e0" exitCode=143
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.720602 5072 generic.go:334] "Generic (PLEG): container finished" podID="1bc09d77-5ad5-40bb-a7d9-327834ebfd07" containerID="ab7a9b5c6b635c90135f4e4a5eec2cc51cc65df4dee1294b128bafdd964a954b" exitCode=143
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.720636 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1bc09d77-5ad5-40bb-a7d9-327834ebfd07","Type":"ContainerDied","Data":"1f9512203b5d653be41e6eb617e459328c12ee26a9cf65323229d95f1bf8c6e0"}
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.720682 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1bc09d77-5ad5-40bb-a7d9-327834ebfd07","Type":"ContainerDied","Data":"ab7a9b5c6b635c90135f4e4a5eec2cc51cc65df4dee1294b128bafdd964a954b"}
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.723952 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1bd3753f-127a-40e9-9406-3c34efbf1e17","Type":"ContainerDied","Data":"27373fff7f0277deaa3590a9fa833ccaedf0f95f60a2ababb6a9e01ebeeb5e38"}
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.723995 5072 scope.go:117] "RemoveContainer" containerID="7fab48aa07d1d4d85e8ccc0562e8a42afc322a02d1beadbcab461b2638247ed2"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.724200 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.771555 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.779733 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.788360 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 11:59:00 crc kubenswrapper[5072]: E1124 11:59:00.788812 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bd3753f-127a-40e9-9406-3c34efbf1e17" containerName="glance-httpd"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.788830 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bd3753f-127a-40e9-9406-3c34efbf1e17" containerName="glance-httpd"
Nov 24 11:59:00 crc kubenswrapper[5072]: E1124 11:59:00.788857 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bd3753f-127a-40e9-9406-3c34efbf1e17" containerName="glance-log"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.788863 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bd3753f-127a-40e9-9406-3c34efbf1e17" containerName="glance-log"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.789054 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bd3753f-127a-40e9-9406-3c34efbf1e17" containerName="glance-httpd"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.789076 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bd3753f-127a-40e9-9406-3c34efbf1e17" containerName="glance-log"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.790136 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.792929 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.793209 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.804943 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.869259 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/61880241-c7c3-4422-adbb-3e6323831d71-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"61880241-c7c3-4422-adbb-3e6323831d71\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.869348 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/61880241-c7c3-4422-adbb-3e6323831d71-ceph\") pod \"glance-default-internal-api-0\" (UID: \"61880241-c7c3-4422-adbb-3e6323831d71\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.869395 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61880241-c7c3-4422-adbb-3e6323831d71-logs\") pod \"glance-default-internal-api-0\" (UID: \"61880241-c7c3-4422-adbb-3e6323831d71\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.869415 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61880241-c7c3-4422-adbb-3e6323831d71-config-data\") pod \"glance-default-internal-api-0\" (UID: \"61880241-c7c3-4422-adbb-3e6323831d71\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.869439 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61880241-c7c3-4422-adbb-3e6323831d71-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"61880241-c7c3-4422-adbb-3e6323831d71\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.869515 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"61880241-c7c3-4422-adbb-3e6323831d71\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.869535 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w49z6\" (UniqueName: \"kubernetes.io/projected/61880241-c7c3-4422-adbb-3e6323831d71-kube-api-access-w49z6\") pod \"glance-default-internal-api-0\" (UID: \"61880241-c7c3-4422-adbb-3e6323831d71\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.869584 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61880241-c7c3-4422-adbb-3e6323831d71-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"61880241-c7c3-4422-adbb-3e6323831d71\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.869604 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61880241-c7c3-4422-adbb-3e6323831d71-scripts\") pod \"glance-default-internal-api-0\" (UID: \"61880241-c7c3-4422-adbb-3e6323831d71\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.971256 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"61880241-c7c3-4422-adbb-3e6323831d71\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.971322 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w49z6\" (UniqueName: \"kubernetes.io/projected/61880241-c7c3-4422-adbb-3e6323831d71-kube-api-access-w49z6\") pod \"glance-default-internal-api-0\" (UID: \"61880241-c7c3-4422-adbb-3e6323831d71\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.971422 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61880241-c7c3-4422-adbb-3e6323831d71-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"61880241-c7c3-4422-adbb-3e6323831d71\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.971454 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61880241-c7c3-4422-adbb-3e6323831d71-scripts\") pod \"glance-default-internal-api-0\" (UID: \"61880241-c7c3-4422-adbb-3e6323831d71\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.971511 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/61880241-c7c3-4422-adbb-3e6323831d71-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"61880241-c7c3-4422-adbb-3e6323831d71\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.971573 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/61880241-c7c3-4422-adbb-3e6323831d71-ceph\") pod \"glance-default-internal-api-0\" (UID: \"61880241-c7c3-4422-adbb-3e6323831d71\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.971604 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61880241-c7c3-4422-adbb-3e6323831d71-logs\") pod \"glance-default-internal-api-0\" (UID: \"61880241-c7c3-4422-adbb-3e6323831d71\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.971625 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61880241-c7c3-4422-adbb-3e6323831d71-config-data\") pod \"glance-default-internal-api-0\" (UID: \"61880241-c7c3-4422-adbb-3e6323831d71\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.971652 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61880241-c7c3-4422-adbb-3e6323831d71-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"61880241-c7c3-4422-adbb-3e6323831d71\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.972944 5072 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"61880241-c7c3-4422-adbb-3e6323831d71\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.973984 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61880241-c7c3-4422-adbb-3e6323831d71-logs\") pod \"glance-default-internal-api-0\" (UID: \"61880241-c7c3-4422-adbb-3e6323831d71\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.975722 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/61880241-c7c3-4422-adbb-3e6323831d71-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"61880241-c7c3-4422-adbb-3e6323831d71\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.982215 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/61880241-c7c3-4422-adbb-3e6323831d71-ceph\") pod \"glance-default-internal-api-0\" (UID: \"61880241-c7c3-4422-adbb-3e6323831d71\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.983042 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61880241-c7c3-4422-adbb-3e6323831d71-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"61880241-c7c3-4422-adbb-3e6323831d71\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:00 crc kubenswrapper[5072]: I1124 11:59:00.983163 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61880241-c7c3-4422-adbb-3e6323831d71-config-data\") pod \"glance-default-internal-api-0\" (UID: \"61880241-c7c3-4422-adbb-3e6323831d71\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:01 crc kubenswrapper[5072]: I1124 11:59:01.006885 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61880241-c7c3-4422-adbb-3e6323831d71-scripts\") pod \"glance-default-internal-api-0\" (UID: \"61880241-c7c3-4422-adbb-3e6323831d71\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:01 crc kubenswrapper[5072]: I1124 11:59:01.011580 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"61880241-c7c3-4422-adbb-3e6323831d71\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:01 crc kubenswrapper[5072]: I1124 11:59:01.020792 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61880241-c7c3-4422-adbb-3e6323831d71-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"61880241-c7c3-4422-adbb-3e6323831d71\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:01 crc kubenswrapper[5072]: I1124 11:59:01.022919 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w49z6\" (UniqueName: \"kubernetes.io/projected/61880241-c7c3-4422-adbb-3e6323831d71-kube-api-access-w49z6\") pod \"glance-default-internal-api-0\" (UID: \"61880241-c7c3-4422-adbb-3e6323831d71\") " pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:01 crc kubenswrapper[5072]: I1124 11:59:01.035737 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bd3753f-127a-40e9-9406-3c34efbf1e17" path="/var/lib/kubelet/pods/1bd3753f-127a-40e9-9406-3c34efbf1e17/volumes"
Nov 24 11:59:01 crc kubenswrapper[5072]: I1124 11:59:01.120894 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 24 11:59:03 crc kubenswrapper[5072]: I1124 11:59:03.741342 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0"
Nov 24 11:59:03 crc kubenswrapper[5072]: I1124 11:59:03.892982 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0"
Nov 24 11:59:08 crc kubenswrapper[5072]: I1124 11:59:08.909869 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 24 11:59:08 crc kubenswrapper[5072]: I1124 11:59:08.934860 5072 scope.go:117] "RemoveContainer" containerID="521297ae15ef99c9607a4b67b97725c860bd33ba8eb7388f6b45293742ef3cac"
Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.063823 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-combined-ca-bundle\") pod \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") "
Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.064267 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") "
Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.064457 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-httpd-run\") pod \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") "
Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.064491 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-config-data\") pod \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") "
Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.064562 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-public-tls-certs\") pod \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") "
Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.064597 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgl5v\" (UniqueName: \"kubernetes.io/projected/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-kube-api-access-sgl5v\") pod \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") "
Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.064619 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-scripts\") pod \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") "
Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.064683 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-logs\") pod \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") "
Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.064709 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-ceph\") pod \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\" (UID: \"1bc09d77-5ad5-40bb-a7d9-327834ebfd07\") "
Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.066823 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-logs" (OuterVolumeSpecName: "logs") pod "1bc09d77-5ad5-40bb-a7d9-327834ebfd07" (UID: "1bc09d77-5ad5-40bb-a7d9-327834ebfd07"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.067589 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1bc09d77-5ad5-40bb-a7d9-327834ebfd07" (UID: "1bc09d77-5ad5-40bb-a7d9-327834ebfd07"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.071670 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-scripts" (OuterVolumeSpecName: "scripts") pod "1bc09d77-5ad5-40bb-a7d9-327834ebfd07" (UID: "1bc09d77-5ad5-40bb-a7d9-327834ebfd07"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.071931 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "1bc09d77-5ad5-40bb-a7d9-327834ebfd07" (UID: "1bc09d77-5ad5-40bb-a7d9-327834ebfd07"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.073117 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-ceph" (OuterVolumeSpecName: "ceph") pod "1bc09d77-5ad5-40bb-a7d9-327834ebfd07" (UID: "1bc09d77-5ad5-40bb-a7d9-327834ebfd07").
InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.074754 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-kube-api-access-sgl5v" (OuterVolumeSpecName: "kube-api-access-sgl5v") pod "1bc09d77-5ad5-40bb-a7d9-327834ebfd07" (UID: "1bc09d77-5ad5-40bb-a7d9-327834ebfd07"). InnerVolumeSpecName "kube-api-access-sgl5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.104550 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1bc09d77-5ad5-40bb-a7d9-327834ebfd07" (UID: "1bc09d77-5ad5-40bb-a7d9-327834ebfd07"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.136015 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-config-data" (OuterVolumeSpecName: "config-data") pod "1bc09d77-5ad5-40bb-a7d9-327834ebfd07" (UID: "1bc09d77-5ad5-40bb-a7d9-327834ebfd07"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.153477 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1bc09d77-5ad5-40bb-a7d9-327834ebfd07" (UID: "1bc09d77-5ad5-40bb-a7d9-327834ebfd07"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.169457 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.169506 5072 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.169517 5072 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.169546 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.169574 5072 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.169588 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgl5v\" (UniqueName: \"kubernetes.io/projected/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-kube-api-access-sgl5v\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.169601 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.169611 5072 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.169621 5072 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1bc09d77-5ad5-40bb-a7d9-327834ebfd07-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.189307 5072 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.272895 5072 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.494566 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.824976 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1bc09d77-5ad5-40bb-a7d9-327834ebfd07","Type":"ContainerDied","Data":"2ab19809bff85d08b779239c5adc9b78c44ff708b92845c06130cfd72bacea81"} Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.825059 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.825276 5072 scope.go:117] "RemoveContainer" containerID="1f9512203b5d653be41e6eb617e459328c12ee26a9cf65323229d95f1bf8c6e0" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.827155 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-b55tw" event={"ID":"4a074607-4e56-4d2e-a4ee-87906af89764","Type":"ContainerStarted","Data":"6daad80fe4400ec67e7ab4cfd625d3b2eb92390cc5b7cf71ea478db93ed09e53"} Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.831288 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"61880241-c7c3-4422-adbb-3e6323831d71","Type":"ContainerStarted","Data":"63e6ce280dd74afd41ad3f6015a0499563cca09c1a233bde354778b2951a106c"} Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.877805 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.885812 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.924240 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 11:59:09 crc kubenswrapper[5072]: E1124 11:59:09.924704 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bc09d77-5ad5-40bb-a7d9-327834ebfd07" containerName="glance-log" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.924718 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bc09d77-5ad5-40bb-a7d9-327834ebfd07" containerName="glance-log" Nov 24 11:59:09 crc kubenswrapper[5072]: E1124 11:59:09.924738 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bc09d77-5ad5-40bb-a7d9-327834ebfd07" containerName="glance-httpd" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.924750 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bc09d77-5ad5-40bb-a7d9-327834ebfd07" containerName="glance-httpd" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.924949 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bc09d77-5ad5-40bb-a7d9-327834ebfd07" containerName="glance-httpd" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.924984 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bc09d77-5ad5-40bb-a7d9-327834ebfd07" containerName="glance-log" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.926182 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.928084 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.928627 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.952532 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.955288 5072 scope.go:117] "RemoveContainer" containerID="ab7a9b5c6b635c90135f4e4a5eec2cc51cc65df4dee1294b128bafdd964a954b" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.987899 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d71c9a2-3657-43f6-aec2-b53e3ea8fc01-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01\") " pod="openstack/glance-default-external-api-0" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.988163 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d71c9a2-3657-43f6-aec2-b53e3ea8fc01-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01\") " pod="openstack/glance-default-external-api-0" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.988297 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glnpp\" (UniqueName: \"kubernetes.io/projected/1d71c9a2-3657-43f6-aec2-b53e3ea8fc01-kube-api-access-glnpp\") pod \"glance-default-external-api-0\" (UID: \"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01\") " pod="openstack/glance-default-external-api-0" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.988467 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d71c9a2-3657-43f6-aec2-b53e3ea8fc01-config-data\") pod \"glance-default-external-api-0\" (UID: \"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01\") " pod="openstack/glance-default-external-api-0" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.988588 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1d71c9a2-3657-43f6-aec2-b53e3ea8fc01-ceph\") pod \"glance-default-external-api-0\" (UID: \"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01\") " pod="openstack/glance-default-external-api-0" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.988716 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01\") " pod="openstack/glance-default-external-api-0" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.988808 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d71c9a2-3657-43f6-aec2-b53e3ea8fc01-logs\") pod \"glance-default-external-api-0\" (UID: \"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01\") " 
pod="openstack/glance-default-external-api-0" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.988836 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1d71c9a2-3657-43f6-aec2-b53e3ea8fc01-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01\") " pod="openstack/glance-default-external-api-0" Nov 24 11:59:09 crc kubenswrapper[5072]: I1124 11:59:09.988853 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d71c9a2-3657-43f6-aec2-b53e3ea8fc01-scripts\") pod \"glance-default-external-api-0\" (UID: \"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01\") " pod="openstack/glance-default-external-api-0" Nov 24 11:59:10 crc kubenswrapper[5072]: I1124 11:59:10.090801 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d71c9a2-3657-43f6-aec2-b53e3ea8fc01-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01\") " pod="openstack/glance-default-external-api-0" Nov 24 11:59:10 crc kubenswrapper[5072]: I1124 11:59:10.090870 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glnpp\" (UniqueName: \"kubernetes.io/projected/1d71c9a2-3657-43f6-aec2-b53e3ea8fc01-kube-api-access-glnpp\") pod \"glance-default-external-api-0\" (UID: \"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01\") " pod="openstack/glance-default-external-api-0" Nov 24 11:59:10 crc kubenswrapper[5072]: I1124 11:59:10.090905 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d71c9a2-3657-43f6-aec2-b53e3ea8fc01-config-data\") pod \"glance-default-external-api-0\" (UID: \"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01\") " pod="openstack/glance-default-external-api-0" Nov 24 11:59:10 crc kubenswrapper[5072]: I1124 11:59:10.090969 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1d71c9a2-3657-43f6-aec2-b53e3ea8fc01-ceph\") pod \"glance-default-external-api-0\" (UID: \"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01\") " pod="openstack/glance-default-external-api-0" Nov 24 11:59:10 crc kubenswrapper[5072]: I1124 11:59:10.091032 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01\") " pod="openstack/glance-default-external-api-0" Nov 24 11:59:10 crc kubenswrapper[5072]: I1124 11:59:10.091113 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d71c9a2-3657-43f6-aec2-b53e3ea8fc01-logs\") pod \"glance-default-external-api-0\" (UID: \"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01\") " pod="openstack/glance-default-external-api-0" Nov 24 11:59:10 crc kubenswrapper[5072]: I1124 11:59:10.091140 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d71c9a2-3657-43f6-aec2-b53e3ea8fc01-scripts\") pod \"glance-default-external-api-0\" (UID: \"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01\") " pod="openstack/glance-default-external-api-0" Nov 24 11:59:10 crc kubenswrapper[5072]: I1124 
11:59:10.091164 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1d71c9a2-3657-43f6-aec2-b53e3ea8fc01-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01\") " pod="openstack/glance-default-external-api-0" Nov 24 11:59:10 crc kubenswrapper[5072]: I1124 11:59:10.091190 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d71c9a2-3657-43f6-aec2-b53e3ea8fc01-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01\") " pod="openstack/glance-default-external-api-0" Nov 24 11:59:10 crc kubenswrapper[5072]: I1124 11:59:10.092686 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d71c9a2-3657-43f6-aec2-b53e3ea8fc01-logs\") pod \"glance-default-external-api-0\" (UID: \"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01\") " pod="openstack/glance-default-external-api-0" Nov 24 11:59:10 crc kubenswrapper[5072]: I1124 11:59:10.092712 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1d71c9a2-3657-43f6-aec2-b53e3ea8fc01-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01\") " pod="openstack/glance-default-external-api-0" Nov 24 11:59:10 crc kubenswrapper[5072]: I1124 11:59:10.093018 5072 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Nov 24 11:59:10 crc kubenswrapper[5072]: I1124 11:59:10.098997 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d71c9a2-3657-43f6-aec2-b53e3ea8fc01-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01\") " pod="openstack/glance-default-external-api-0" Nov 24 11:59:10 crc kubenswrapper[5072]: I1124 11:59:10.099215 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d71c9a2-3657-43f6-aec2-b53e3ea8fc01-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01\") " pod="openstack/glance-default-external-api-0" Nov 24 11:59:10 crc kubenswrapper[5072]: I1124 11:59:10.102625 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d71c9a2-3657-43f6-aec2-b53e3ea8fc01-config-data\") pod \"glance-default-external-api-0\" (UID: \"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01\") " pod="openstack/glance-default-external-api-0" Nov 24 11:59:10 crc kubenswrapper[5072]: I1124 11:59:10.111221 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d71c9a2-3657-43f6-aec2-b53e3ea8fc01-scripts\") pod \"glance-default-external-api-0\" (UID: \"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01\") " pod="openstack/glance-default-external-api-0" Nov 24 11:59:10 crc kubenswrapper[5072]: I1124 11:59:10.111689 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/1d71c9a2-3657-43f6-aec2-b53e3ea8fc01-ceph\") pod \"glance-default-external-api-0\" (UID: \"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01\") " pod="openstack/glance-default-external-api-0" Nov 24 11:59:10 crc kubenswrapper[5072]: I1124 11:59:10.115176 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glnpp\" (UniqueName: \"kubernetes.io/projected/1d71c9a2-3657-43f6-aec2-b53e3ea8fc01-kube-api-access-glnpp\") pod \"glance-default-external-api-0\" (UID: \"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01\") " pod="openstack/glance-default-external-api-0" Nov 24 11:59:10 crc kubenswrapper[5072]: I1124 11:59:10.135467 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01\") " pod="openstack/glance-default-external-api-0" Nov 24 11:59:10 crc kubenswrapper[5072]: I1124 11:59:10.249476 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 11:59:10 crc kubenswrapper[5072]: I1124 11:59:10.850225 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 11:59:10 crc kubenswrapper[5072]: W1124 11:59:10.852071 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d71c9a2_3657_43f6_aec2_b53e3ea8fc01.slice/crio-b99133a2d77e269a27861a0d2e52cf40cc319fb3bd7c1793fe129ab7ebe9b55a WatchSource:0}: Error finding container b99133a2d77e269a27861a0d2e52cf40cc319fb3bd7c1793fe129ab7ebe9b55a: Status 404 returned error can't find the container with id b99133a2d77e269a27861a0d2e52cf40cc319fb3bd7c1793fe129ab7ebe9b55a Nov 24 11:59:11 crc kubenswrapper[5072]: I1124 11:59:11.028110 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bc09d77-5ad5-40bb-a7d9-327834ebfd07" path="/var/lib/kubelet/pods/1bc09d77-5ad5-40bb-a7d9-327834ebfd07/volumes" Nov 24 11:59:11 crc kubenswrapper[5072]: I1124 11:59:11.855204 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"61880241-c7c3-4422-adbb-3e6323831d71","Type":"ContainerStarted","Data":"92eaefc49e0ff2755c1eb17fd06e0086ca141cc3e166461525a9e940cbd61696"} Nov 24 11:59:11 crc kubenswrapper[5072]: I1124 11:59:11.856836 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01","Type":"ContainerStarted","Data":"b99133a2d77e269a27861a0d2e52cf40cc319fb3bd7c1793fe129ab7ebe9b55a"} Nov 24 11:59:12 crc kubenswrapper[5072]: I1124 11:59:12.015974 5072 scope.go:117] "RemoveContainer" containerID="4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f" Nov 24 11:59:12 crc kubenswrapper[5072]: E1124 11:59:12.016265 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:59:16 crc kubenswrapper[5072]: E1124 11:59:16.953318 5072 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled 
desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 24 11:59:16 crc kubenswrapper[5072]: E1124 11:59:16.954024 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd5h5d7h65dh687h95h5fdh66bhbh86h5b7h5f4h9chd4h66fh657hbfhf7h55h79h96h689h66hb8hffh677h95h5f8h646h59ch5f6h7dh676q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-th9l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-668c6889fc-xbssb_openstack(18b8401a-38a6-41b3-abc0-d4924c551633): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:59:16 crc kubenswrapper[5072]: E1124 11:59:16.958605 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-668c6889fc-xbssb" podUID="18b8401a-38a6-41b3-abc0-d4924c551633" Nov 24 11:59:16 crc kubenswrapper[5072]: E1124 11:59:16.962189 5072 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 24 11:59:16 crc kubenswrapper[5072]: E1124 11:59:16.962467 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n568h5fch5ffh56h7fhc7h5bh58dh54fh5d4h5cdh8fh99hdch68dh8fh5d4hbbh669hc9h99hd4h5bh85h66bh594h65bh54fh5fch5d4h77h57fq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-442kk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6ccd6d974c-ptg7b_openstack(2a9d62a7-fa35-4937-8cf4-31142e2f0623): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:59:16 crc kubenswrapper[5072]: E1124 11:59:16.965294 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6ccd6d974c-ptg7b" podUID="2a9d62a7-fa35-4937-8cf4-31142e2f0623" Nov 24 11:59:17 crc kubenswrapper[5072]: E1124 11:59:17.042525 5072 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 24 11:59:17 crc kubenswrapper[5072]: E1124 11:59:17.042695 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5bdh57ch566hf8hd5hd5hdbh57chcdh76h698h667h79h655h584h6hf8h97h5dh68dh585hd7h597hbch546h58h599h69h595h594h5bchf7q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xtdqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-575b5d47b6-n66fd_openstack(78739666-79c8-4af9-9766-6793e7975629): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:59:17 crc kubenswrapper[5072]: E1124 11:59:17.045111 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-575b5d47b6-n66fd" podUID="78739666-79c8-4af9-9766-6793e7975629" Nov 24 11:59:17 crc kubenswrapper[5072]: E1124 11:59:17.504222 5072 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 24 11:59:17 crc kubenswrapper[5072]: E1124 11:59:17.504458 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n55dh689h685h5bh76h596h8hcdh5hb6h5b5h89h55bh75h9bh658h67bh68fhf7hc5h599h55dh598h547h684h58h5f4h56dh65dh669h8h589q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-97wgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-587d57694d-km6sf_openstack(3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:59:17 crc kubenswrapper[5072]: E1124 11:59:17.506994 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-587d57694d-km6sf" podUID="3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" Nov 24 11:59:17 crc kubenswrapper[5072]: I1124 11:59:17.915994 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01","Type":"ContainerStarted","Data":"e95256c8fd4a84fcfa7f6358dd86d3dab3ebda90cb11166417476e0c6a79abce"} Nov 24 11:59:17 crc kubenswrapper[5072]: E1124 11:59:17.920107 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-575b5d47b6-n66fd" podUID="78739666-79c8-4af9-9766-6793e7975629" Nov 24 11:59:17 crc kubenswrapper[5072]: E1124 11:59:17.920770 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-587d57694d-km6sf" podUID="3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.454697 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-668c6889fc-xbssb" Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.462710 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6ccd6d974c-ptg7b" Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.592303 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2a9d62a7-fa35-4937-8cf4-31142e2f0623-horizon-secret-key\") pod \"2a9d62a7-fa35-4937-8cf4-31142e2f0623\" (UID: \"2a9d62a7-fa35-4937-8cf4-31142e2f0623\") " Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.592572 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/18b8401a-38a6-41b3-abc0-d4924c551633-horizon-secret-key\") pod \"18b8401a-38a6-41b3-abc0-d4924c551633\" (UID: \"18b8401a-38a6-41b3-abc0-d4924c551633\") " Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.592692 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18b8401a-38a6-41b3-abc0-d4924c551633-logs\") pod \"18b8401a-38a6-41b3-abc0-d4924c551633\" (UID: \"18b8401a-38a6-41b3-abc0-d4924c551633\") " Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.592790 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a9d62a7-fa35-4937-8cf4-31142e2f0623-logs\") pod \"2a9d62a7-fa35-4937-8cf4-31142e2f0623\" (UID: \"2a9d62a7-fa35-4937-8cf4-31142e2f0623\") " Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.592869 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-442kk\" (UniqueName: \"kubernetes.io/projected/2a9d62a7-fa35-4937-8cf4-31142e2f0623-kube-api-access-442kk\") pod \"2a9d62a7-fa35-4937-8cf4-31142e2f0623\" (UID: \"2a9d62a7-fa35-4937-8cf4-31142e2f0623\") " Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.592937 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a9d62a7-fa35-4937-8cf4-31142e2f0623-config-data\") pod \"2a9d62a7-fa35-4937-8cf4-31142e2f0623\" (UID: \"2a9d62a7-fa35-4937-8cf4-31142e2f0623\") " Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.593076 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18b8401a-38a6-41b3-abc0-d4924c551633-scripts\") pod \"18b8401a-38a6-41b3-abc0-d4924c551633\" (UID: \"18b8401a-38a6-41b3-abc0-d4924c551633\") " Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.593337 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th9l9\" (UniqueName: \"kubernetes.io/projected/18b8401a-38a6-41b3-abc0-d4924c551633-kube-api-access-th9l9\") pod \"18b8401a-38a6-41b3-abc0-d4924c551633\" (UID: \"18b8401a-38a6-41b3-abc0-d4924c551633\") " Nov 24 11:59:22 
crc kubenswrapper[5072]: I1124 11:59:22.593590 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18b8401a-38a6-41b3-abc0-d4924c551633-config-data\") pod \"18b8401a-38a6-41b3-abc0-d4924c551633\" (UID: \"18b8401a-38a6-41b3-abc0-d4924c551633\") " Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.593750 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a9d62a7-fa35-4937-8cf4-31142e2f0623-scripts\") pod \"2a9d62a7-fa35-4937-8cf4-31142e2f0623\" (UID: \"2a9d62a7-fa35-4937-8cf4-31142e2f0623\") " Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.594152 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18b8401a-38a6-41b3-abc0-d4924c551633-scripts" (OuterVolumeSpecName: "scripts") pod "18b8401a-38a6-41b3-abc0-d4924c551633" (UID: "18b8401a-38a6-41b3-abc0-d4924c551633"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.594431 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18b8401a-38a6-41b3-abc0-d4924c551633-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.594873 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a9d62a7-fa35-4937-8cf4-31142e2f0623-scripts" (OuterVolumeSpecName: "scripts") pod "2a9d62a7-fa35-4937-8cf4-31142e2f0623" (UID: "2a9d62a7-fa35-4937-8cf4-31142e2f0623"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.595042 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18b8401a-38a6-41b3-abc0-d4924c551633-config-data" (OuterVolumeSpecName: "config-data") pod "18b8401a-38a6-41b3-abc0-d4924c551633" (UID: "18b8401a-38a6-41b3-abc0-d4924c551633"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.595228 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18b8401a-38a6-41b3-abc0-d4924c551633-logs" (OuterVolumeSpecName: "logs") pod "18b8401a-38a6-41b3-abc0-d4924c551633" (UID: "18b8401a-38a6-41b3-abc0-d4924c551633"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.595712 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a9d62a7-fa35-4937-8cf4-31142e2f0623-config-data" (OuterVolumeSpecName: "config-data") pod "2a9d62a7-fa35-4937-8cf4-31142e2f0623" (UID: "2a9d62a7-fa35-4937-8cf4-31142e2f0623"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.595778 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a9d62a7-fa35-4937-8cf4-31142e2f0623-logs" (OuterVolumeSpecName: "logs") pod "2a9d62a7-fa35-4937-8cf4-31142e2f0623" (UID: "2a9d62a7-fa35-4937-8cf4-31142e2f0623"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.597858 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18b8401a-38a6-41b3-abc0-d4924c551633-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "18b8401a-38a6-41b3-abc0-d4924c551633" (UID: "18b8401a-38a6-41b3-abc0-d4924c551633"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.598266 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a9d62a7-fa35-4937-8cf4-31142e2f0623-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2a9d62a7-fa35-4937-8cf4-31142e2f0623" (UID: "2a9d62a7-fa35-4937-8cf4-31142e2f0623"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.599997 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18b8401a-38a6-41b3-abc0-d4924c551633-kube-api-access-th9l9" (OuterVolumeSpecName: "kube-api-access-th9l9") pod "18b8401a-38a6-41b3-abc0-d4924c551633" (UID: "18b8401a-38a6-41b3-abc0-d4924c551633"). InnerVolumeSpecName "kube-api-access-th9l9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.600606 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a9d62a7-fa35-4937-8cf4-31142e2f0623-kube-api-access-442kk" (OuterVolumeSpecName: "kube-api-access-442kk") pod "2a9d62a7-fa35-4937-8cf4-31142e2f0623" (UID: "2a9d62a7-fa35-4937-8cf4-31142e2f0623"). InnerVolumeSpecName "kube-api-access-442kk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.696781 5072 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2a9d62a7-fa35-4937-8cf4-31142e2f0623-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.697083 5072 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/18b8401a-38a6-41b3-abc0-d4924c551633-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.697092 5072 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18b8401a-38a6-41b3-abc0-d4924c551633-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.697103 5072 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a9d62a7-fa35-4937-8cf4-31142e2f0623-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.697153 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-442kk\" (UniqueName: \"kubernetes.io/projected/2a9d62a7-fa35-4937-8cf4-31142e2f0623-kube-api-access-442kk\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.697166 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a9d62a7-fa35-4937-8cf4-31142e2f0623-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.697175 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th9l9\" (UniqueName: \"kubernetes.io/projected/18b8401a-38a6-41b3-abc0-d4924c551633-kube-api-access-th9l9\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.697184 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18b8401a-38a6-41b3-abc0-d4924c551633-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.697194 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a9d62a7-fa35-4937-8cf4-31142e2f0623-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.963481 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-668c6889fc-xbssb" Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.963597 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-668c6889fc-xbssb" event={"ID":"18b8401a-38a6-41b3-abc0-d4924c551633","Type":"ContainerDied","Data":"0dcc5f0c3978922749c77142a5ad73a5930aeb927c3f9e77f45c6659c3b0825c"} Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.965177 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6ccd6d974c-ptg7b" event={"ID":"2a9d62a7-fa35-4937-8cf4-31142e2f0623","Type":"ContainerDied","Data":"41dd769c5032ed2aac0444f5c443c2451bf77a0874ac6b4f26532df497df5ea0"} Nov 24 11:59:22 crc kubenswrapper[5072]: I1124 11:59:22.965232 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6ccd6d974c-ptg7b" Nov 24 11:59:23 crc kubenswrapper[5072]: I1124 11:59:23.016231 5072 scope.go:117] "RemoveContainer" containerID="4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f" Nov 24 11:59:23 crc kubenswrapper[5072]: E1124 11:59:23.016610 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:59:23 crc kubenswrapper[5072]: I1124 11:59:23.094260 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-668c6889fc-xbssb"] Nov 24 11:59:23 crc kubenswrapper[5072]: I1124 11:59:23.103446 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-668c6889fc-xbssb"] Nov 24 11:59:23 crc kubenswrapper[5072]: I1124 11:59:23.124113 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6ccd6d974c-ptg7b"] Nov 24 11:59:23 crc kubenswrapper[5072]: I1124 11:59:23.131949 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6ccd6d974c-ptg7b"] Nov 24 11:59:23 crc kubenswrapper[5072]: E1124 11:59:23.215668 5072 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18b8401a_38a6_41b3_abc0_d4924c551633.slice/crio-0dcc5f0c3978922749c77142a5ad73a5930aeb927c3f9e77f45c6659c3b0825c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a9d62a7_fa35_4937_8cf4_31142e2f0623.slice/crio-41dd769c5032ed2aac0444f5c443c2451bf77a0874ac6b4f26532df497df5ea0\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18b8401a_38a6_41b3_abc0_d4924c551633.slice\": RecentStats: unable to find data in memory cache]" Nov 24 11:59:23 crc kubenswrapper[5072]: I1124 11:59:23.975947 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1d71c9a2-3657-43f6-aec2-b53e3ea8fc01","Type":"ContainerStarted","Data":"7387d0a9cf1da757bfebf1ea8b9961de14522a8b5efe2ee3da796c48c266f94b"} Nov 24 11:59:23 crc kubenswrapper[5072]: I1124 11:59:23.977470 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-b55tw" event={"ID":"4a074607-4e56-4d2e-a4ee-87906af89764","Type":"ContainerStarted","Data":"1d87411ad890d3383fdb2466f4b2255ae671da030dc8f2cf61121b7460f5c1b3"} Nov 24 11:59:23 crc kubenswrapper[5072]: I1124 11:59:23.979408 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"61880241-c7c3-4422-adbb-3e6323831d71","Type":"ContainerStarted","Data":"4db565b54d3a8cca66c195cc535643740aff2b81b548511235d898541e80d474"} Nov 24 11:59:24 crc kubenswrapper[5072]: I1124 11:59:24.014417 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=15.014393769 podStartE2EDuration="15.014393769s" podCreationTimestamp="2025-11-24 11:59:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-24 11:59:23.997595701 +0000 UTC m=+3015.709120187" watchObservedRunningTime="2025-11-24 11:59:24.014393769 +0000 UTC m=+3015.725918245" Nov 24 11:59:24 crc kubenswrapper[5072]: I1124 11:59:24.016229 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-b55tw" podStartSLOduration=11.37263866 podStartE2EDuration="25.016219835s" podCreationTimestamp="2025-11-24 11:58:59 +0000 UTC" firstStartedPulling="2025-11-24 11:59:08.934848353 +0000 UTC m=+3000.646372839" lastFinishedPulling="2025-11-24 11:59:22.578429538 +0000 UTC m=+3014.289954014" observedRunningTime="2025-11-24 11:59:24.013920748 +0000 UTC m=+3015.725445234" watchObservedRunningTime="2025-11-24 11:59:24.016219835 +0000 UTC m=+3015.727744301" Nov 24 11:59:24 crc kubenswrapper[5072]: I1124 11:59:24.043931 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=24.043908064 podStartE2EDuration="24.043908064s" podCreationTimestamp="2025-11-24 11:59:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:59:24.035891055 +0000 UTC m=+3015.747415541" watchObservedRunningTime="2025-11-24 11:59:24.043908064 +0000 UTC m=+3015.755432550" Nov 24 11:59:25 crc kubenswrapper[5072]: I1124 11:59:25.029840 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18b8401a-38a6-41b3-abc0-d4924c551633" path="/var/lib/kubelet/pods/18b8401a-38a6-41b3-abc0-d4924c551633/volumes" Nov 24 11:59:25 crc kubenswrapper[5072]: I1124 11:59:25.031186 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a9d62a7-fa35-4937-8cf4-31142e2f0623" path="/var/lib/kubelet/pods/2a9d62a7-fa35-4937-8cf4-31142e2f0623/volumes" Nov 24 11:59:30 crc kubenswrapper[5072]: I1124 11:59:30.250580 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 11:59:30 crc kubenswrapper[5072]: I1124 11:59:30.251283 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 11:59:30 crc kubenswrapper[5072]: I1124 11:59:30.365414 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 11:59:30 crc kubenswrapper[5072]: I1124 11:59:30.366641 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 11:59:31 crc kubenswrapper[5072]: I1124 11:59:31.042950 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 11:59:31 crc kubenswrapper[5072]: I1124 11:59:31.043254 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 11:59:31 crc kubenswrapper[5072]: I1124 11:59:31.120987 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 11:59:31 crc kubenswrapper[5072]: I1124 11:59:31.121052 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 11:59:31 crc kubenswrapper[5072]: I1124 11:59:31.121073 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 11:59:31 crc kubenswrapper[5072]: I1124 11:59:31.121089 
5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 11:59:31 crc kubenswrapper[5072]: I1124 11:59:31.151639 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 11:59:31 crc kubenswrapper[5072]: I1124 11:59:31.182010 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 11:59:36 crc kubenswrapper[5072]: I1124 11:59:36.018185 5072 scope.go:117] "RemoveContainer" containerID="4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f" Nov 24 11:59:36 crc kubenswrapper[5072]: E1124 11:59:36.019550 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:59:48 crc kubenswrapper[5072]: I1124 11:59:48.017130 5072 scope.go:117] "RemoveContainer" containerID="4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f" Nov 24 11:59:48 crc kubenswrapper[5072]: E1124 11:59:48.018151 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 11:59:48 crc kubenswrapper[5072]: I1124 11:59:48.219121 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-587d57694d-km6sf" event={"ID":"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f","Type":"ContainerStarted","Data":"aec4b15829b4affb5daa97f04b55773c915c3c649ce3aa744732507ee9bac4c7"} Nov 24 11:59:48 crc kubenswrapper[5072]: I1124 11:59:48.220837 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-575b5d47b6-n66fd" event={"ID":"78739666-79c8-4af9-9766-6793e7975629","Type":"ContainerStarted","Data":"9c024aeb1de62a367e5bd917a9422161d5b647211dd96e83ebca225e7938f841"} Nov 24 11:59:50 crc kubenswrapper[5072]: I1124 11:59:50.239000 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-587d57694d-km6sf" event={"ID":"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f","Type":"ContainerStarted","Data":"66054e0d1c884046c07bdf9ebcfb3c6f1bbbdc040b8d3e2aff52418bbfaa52d3"} Nov 24 11:59:50 crc kubenswrapper[5072]: I1124 11:59:50.242283 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-575b5d47b6-n66fd" event={"ID":"78739666-79c8-4af9-9766-6793e7975629","Type":"ContainerStarted","Data":"ee492ff17199a762006f692e01e4272485e0743ef5b342026c0a146e4ec6470b"} Nov 24 11:59:50 crc kubenswrapper[5072]: I1124 11:59:50.271651 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-575b5d47b6-n66fd" podStartSLOduration=5.438962106 podStartE2EDuration="54.27162959s" podCreationTimestamp="2025-11-24 11:58:56 +0000 UTC" firstStartedPulling="2025-11-24 11:58:57.971485154 +0000 UTC m=+2989.683009640" lastFinishedPulling="2025-11-24 11:59:46.804152648 +0000 UTC m=+3038.515677124" 
observedRunningTime="2025-11-24 11:59:50.262584274 +0000 UTC m=+3041.974108750" watchObservedRunningTime="2025-11-24 11:59:50.27162959 +0000 UTC m=+3041.983154066" Nov 24 11:59:51 crc kubenswrapper[5072]: I1124 11:59:51.282496 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-587d57694d-km6sf" podStartSLOduration=6.416853475 podStartE2EDuration="55.28247984s" podCreationTimestamp="2025-11-24 11:58:56 +0000 UTC" firstStartedPulling="2025-11-24 11:58:57.939359714 +0000 UTC m=+2989.650884200" lastFinishedPulling="2025-11-24 11:59:46.804986089 +0000 UTC m=+3038.516510565" observedRunningTime="2025-11-24 11:59:51.275511096 +0000 UTC m=+3042.987035582" watchObservedRunningTime="2025-11-24 11:59:51.28247984 +0000 UTC m=+3042.994004316" Nov 24 11:59:57 crc kubenswrapper[5072]: I1124 11:59:57.283522 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-587d57694d-km6sf" Nov 24 11:59:57 crc kubenswrapper[5072]: I1124 11:59:57.283985 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-587d57694d-km6sf" Nov 24 11:59:57 crc kubenswrapper[5072]: I1124 11:59:57.453904 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-575b5d47b6-n66fd" Nov 24 11:59:57 crc kubenswrapper[5072]: I1124 11:59:57.453966 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-575b5d47b6-n66fd" Nov 24 11:59:59 crc kubenswrapper[5072]: I1124 11:59:59.022779 5072 scope.go:117] "RemoveContainer" containerID="4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f" Nov 24 11:59:59 crc kubenswrapper[5072]: E1124 11:59:59.023351 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:00:00 crc kubenswrapper[5072]: I1124 12:00:00.179236 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399760-dkstv"] Nov 24 12:00:00 crc kubenswrapper[5072]: I1124 12:00:00.180846 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-dkstv" Nov 24 12:00:00 crc kubenswrapper[5072]: I1124 12:00:00.183111 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 12:00:00 crc kubenswrapper[5072]: I1124 12:00:00.183887 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 12:00:00 crc kubenswrapper[5072]: I1124 12:00:00.196898 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399760-dkstv"] Nov 24 12:00:00 crc kubenswrapper[5072]: I1124 12:00:00.314903 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70e336e6-2a4c-4fc0-a0ee-6668ad67cd14-config-volume\") pod \"collect-profiles-29399760-dkstv\" (UID: \"70e336e6-2a4c-4fc0-a0ee-6668ad67cd14\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-dkstv" Nov 24 12:00:00 crc kubenswrapper[5072]: I1124 12:00:00.314961 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70e336e6-2a4c-4fc0-a0ee-6668ad67cd14-secret-volume\") pod \"collect-profiles-29399760-dkstv\" (UID: \"70e336e6-2a4c-4fc0-a0ee-6668ad67cd14\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-dkstv" Nov 24 12:00:00 crc kubenswrapper[5072]: I1124 12:00:00.315048 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c47wt\" (UniqueName: \"kubernetes.io/projected/70e336e6-2a4c-4fc0-a0ee-6668ad67cd14-kube-api-access-c47wt\") pod \"collect-profiles-29399760-dkstv\" (UID: \"70e336e6-2a4c-4fc0-a0ee-6668ad67cd14\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-dkstv" Nov 24 12:00:00 crc kubenswrapper[5072]: I1124 12:00:00.416797 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c47wt\" (UniqueName: \"kubernetes.io/projected/70e336e6-2a4c-4fc0-a0ee-6668ad67cd14-kube-api-access-c47wt\") pod \"collect-profiles-29399760-dkstv\" (UID: \"70e336e6-2a4c-4fc0-a0ee-6668ad67cd14\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-dkstv" Nov 24 12:00:00 crc kubenswrapper[5072]: I1124 12:00:00.416975 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70e336e6-2a4c-4fc0-a0ee-6668ad67cd14-config-volume\") pod \"collect-profiles-29399760-dkstv\" (UID: \"70e336e6-2a4c-4fc0-a0ee-6668ad67cd14\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-dkstv" Nov 24 12:00:00 crc kubenswrapper[5072]: I1124 12:00:00.417020 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70e336e6-2a4c-4fc0-a0ee-6668ad67cd14-secret-volume\") pod \"collect-profiles-29399760-dkstv\" (UID: \"70e336e6-2a4c-4fc0-a0ee-6668ad67cd14\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-dkstv" Nov 24 12:00:00 crc kubenswrapper[5072]: I1124 12:00:00.417858 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70e336e6-2a4c-4fc0-a0ee-6668ad67cd14-config-volume\") pod 
\"collect-profiles-29399760-dkstv\" (UID: \"70e336e6-2a4c-4fc0-a0ee-6668ad67cd14\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-dkstv" Nov 24 12:00:00 crc kubenswrapper[5072]: I1124 12:00:00.425649 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70e336e6-2a4c-4fc0-a0ee-6668ad67cd14-secret-volume\") pod \"collect-profiles-29399760-dkstv\" (UID: \"70e336e6-2a4c-4fc0-a0ee-6668ad67cd14\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-dkstv" Nov 24 12:00:00 crc kubenswrapper[5072]: I1124 12:00:00.432865 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c47wt\" (UniqueName: \"kubernetes.io/projected/70e336e6-2a4c-4fc0-a0ee-6668ad67cd14-kube-api-access-c47wt\") pod \"collect-profiles-29399760-dkstv\" (UID: \"70e336e6-2a4c-4fc0-a0ee-6668ad67cd14\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-dkstv" Nov 24 12:00:00 crc kubenswrapper[5072]: I1124 12:00:00.521727 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-dkstv" Nov 24 12:00:00 crc kubenswrapper[5072]: I1124 12:00:00.994248 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399760-dkstv"] Nov 24 12:00:01 crc kubenswrapper[5072]: I1124 12:00:01.059607 5072 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="1d71c9a2-3657-43f6-aec2-b53e3ea8fc01" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.244:9292/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 12:00:01 crc kubenswrapper[5072]: I1124 12:00:01.059820 5072 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="1d71c9a2-3657-43f6-aec2-b53e3ea8fc01" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.244:9292/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 12:00:01 crc kubenswrapper[5072]: I1124 12:00:01.359514 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-dkstv" event={"ID":"70e336e6-2a4c-4fc0-a0ee-6668ad67cd14","Type":"ContainerStarted","Data":"ba0b62f269e2c35f5f0db158b0b3e8ac15f4b4bedac03d4b2b65de3bffa63f7e"} Nov 24 12:00:01 crc kubenswrapper[5072]: I1124 12:00:01.687071 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 12:00:01 crc kubenswrapper[5072]: I1124 12:00:01.687225 5072 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 12:00:01 crc kubenswrapper[5072]: I1124 12:00:01.690253 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 12:00:02 crc kubenswrapper[5072]: I1124 12:00:02.379336 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-dkstv" event={"ID":"70e336e6-2a4c-4fc0-a0ee-6668ad67cd14","Type":"ContainerStarted","Data":"6afe1c5e558064ce7acafe0b08ef52401fe7115f4019cb5b4ca98a3a34827fe5"} Nov 24 12:00:03 crc kubenswrapper[5072]: I1124 12:00:03.004309 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-external-api-0" Nov 24 12:00:03 crc kubenswrapper[5072]: I1124 12:00:03.009450 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 12:00:04 crc kubenswrapper[5072]: I1124 12:00:04.407355 5072 generic.go:334] "Generic (PLEG): container finished" podID="70e336e6-2a4c-4fc0-a0ee-6668ad67cd14" containerID="6afe1c5e558064ce7acafe0b08ef52401fe7115f4019cb5b4ca98a3a34827fe5" exitCode=0 Nov 24 12:00:04 crc kubenswrapper[5072]: I1124 12:00:04.407956 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-dkstv" event={"ID":"70e336e6-2a4c-4fc0-a0ee-6668ad67cd14","Type":"ContainerDied","Data":"6afe1c5e558064ce7acafe0b08ef52401fe7115f4019cb5b4ca98a3a34827fe5"} Nov 24 12:00:05 crc kubenswrapper[5072]: I1124 12:00:05.792488 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-dkstv" Nov 24 12:00:05 crc kubenswrapper[5072]: I1124 12:00:05.854982 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70e336e6-2a4c-4fc0-a0ee-6668ad67cd14-secret-volume\") pod \"70e336e6-2a4c-4fc0-a0ee-6668ad67cd14\" (UID: \"70e336e6-2a4c-4fc0-a0ee-6668ad67cd14\") " Nov 24 12:00:05 crc kubenswrapper[5072]: I1124 12:00:05.855046 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70e336e6-2a4c-4fc0-a0ee-6668ad67cd14-config-volume\") pod \"70e336e6-2a4c-4fc0-a0ee-6668ad67cd14\" (UID: \"70e336e6-2a4c-4fc0-a0ee-6668ad67cd14\") " Nov 24 12:00:05 crc kubenswrapper[5072]: I1124 12:00:05.855119 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c47wt\" (UniqueName: \"kubernetes.io/projected/70e336e6-2a4c-4fc0-a0ee-6668ad67cd14-kube-api-access-c47wt\") pod \"70e336e6-2a4c-4fc0-a0ee-6668ad67cd14\" (UID: \"70e336e6-2a4c-4fc0-a0ee-6668ad67cd14\") " Nov 24 12:00:05 crc kubenswrapper[5072]: I1124 12:00:05.856580 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70e336e6-2a4c-4fc0-a0ee-6668ad67cd14-config-volume" (OuterVolumeSpecName: "config-volume") pod "70e336e6-2a4c-4fc0-a0ee-6668ad67cd14" (UID: "70e336e6-2a4c-4fc0-a0ee-6668ad67cd14"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:00:05 crc kubenswrapper[5072]: I1124 12:00:05.860380 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70e336e6-2a4c-4fc0-a0ee-6668ad67cd14-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "70e336e6-2a4c-4fc0-a0ee-6668ad67cd14" (UID: "70e336e6-2a4c-4fc0-a0ee-6668ad67cd14"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:00:05 crc kubenswrapper[5072]: I1124 12:00:05.860437 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70e336e6-2a4c-4fc0-a0ee-6668ad67cd14-kube-api-access-c47wt" (OuterVolumeSpecName: "kube-api-access-c47wt") pod "70e336e6-2a4c-4fc0-a0ee-6668ad67cd14" (UID: "70e336e6-2a4c-4fc0-a0ee-6668ad67cd14"). InnerVolumeSpecName "kube-api-access-c47wt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:00:05 crc kubenswrapper[5072]: I1124 12:00:05.957660 5072 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70e336e6-2a4c-4fc0-a0ee-6668ad67cd14-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:00:05 crc kubenswrapper[5072]: I1124 12:00:05.957702 5072 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70e336e6-2a4c-4fc0-a0ee-6668ad67cd14-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:00:05 crc kubenswrapper[5072]: I1124 12:00:05.957712 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c47wt\" (UniqueName: \"kubernetes.io/projected/70e336e6-2a4c-4fc0-a0ee-6668ad67cd14-kube-api-access-c47wt\") on node \"crc\" DevicePath \"\"" Nov 24 12:00:06 crc kubenswrapper[5072]: I1124 12:00:06.431305 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-dkstv" event={"ID":"70e336e6-2a4c-4fc0-a0ee-6668ad67cd14","Type":"ContainerDied","Data":"ba0b62f269e2c35f5f0db158b0b3e8ac15f4b4bedac03d4b2b65de3bffa63f7e"} Nov 24 12:00:06 crc kubenswrapper[5072]: I1124 12:00:06.431345 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba0b62f269e2c35f5f0db158b0b3e8ac15f4b4bedac03d4b2b65de3bffa63f7e" Nov 24 12:00:06 crc kubenswrapper[5072]: I1124 12:00:06.431419 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-dkstv" Nov 24 12:00:06 crc kubenswrapper[5072]: I1124 12:00:06.500222 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399715-tdrwn"] Nov 24 12:00:06 crc kubenswrapper[5072]: I1124 12:00:06.508734 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399715-tdrwn"] Nov 24 12:00:07 crc kubenswrapper[5072]: I1124 12:00:07.030635 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad3bb474-e119-49eb-a13d-3c71b170fb33" path="/var/lib/kubelet/pods/ad3bb474-e119-49eb-a13d-3c71b170fb33/volumes" Nov 24 12:00:07 crc kubenswrapper[5072]: I1124 12:00:07.285144 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-587d57694d-km6sf" podUID="3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.240:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.240:8443: connect: connection refused" Nov 24 12:00:07 crc kubenswrapper[5072]: I1124 12:00:07.456598 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-575b5d47b6-n66fd" podUID="78739666-79c8-4af9-9766-6793e7975629" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.241:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.241:8443: connect: connection refused" Nov 24 12:00:11 crc kubenswrapper[5072]: I1124 12:00:11.017212 5072 scope.go:117] "RemoveContainer" containerID="4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f" Nov 24 12:00:11 crc kubenswrapper[5072]: E1124 12:00:11.018016 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:00:17 crc kubenswrapper[5072]: I1124 12:00:17.286266 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-587d57694d-km6sf" podUID="3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.240:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.240:8443: connect: connection refused" Nov 24 12:00:17 crc kubenswrapper[5072]: I1124 12:00:17.455344 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-575b5d47b6-n66fd" podUID="78739666-79c8-4af9-9766-6793e7975629" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.241:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.241:8443: connect: connection refused" Nov 24 12:00:22 crc kubenswrapper[5072]: I1124 12:00:22.558191 5072 scope.go:117] "RemoveContainer" containerID="4a81dc24ed3d563a3996aa3e050718e3c7ea8d792b140465372cabc473f2a017" Nov 24 12:00:23 crc kubenswrapper[5072]: I1124 12:00:23.017052 5072 scope.go:117] "RemoveContainer" containerID="4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f" Nov 24 12:00:23 crc kubenswrapper[5072]: E1124 12:00:23.017620 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:00:27 crc kubenswrapper[5072]: I1124 12:00:27.284150 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-587d57694d-km6sf" podUID="3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.240:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.240:8443: connect: connection refused" Nov 24 12:00:27 crc kubenswrapper[5072]: I1124 12:00:27.284661 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-587d57694d-km6sf" Nov 24 12:00:27 crc kubenswrapper[5072]: I1124 12:00:27.285650 5072 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"66054e0d1c884046c07bdf9ebcfb3c6f1bbbdc040b8d3e2aff52418bbfaa52d3"} pod="openstack/horizon-587d57694d-km6sf" containerMessage="Container horizon failed startup probe, will be restarted" Nov 24 12:00:27 crc kubenswrapper[5072]: I1124 12:00:27.285726 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-587d57694d-km6sf" podUID="3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" containerName="horizon" containerID="cri-o://66054e0d1c884046c07bdf9ebcfb3c6f1bbbdc040b8d3e2aff52418bbfaa52d3" gracePeriod=30 Nov 24 12:00:27 crc kubenswrapper[5072]: I1124 12:00:27.454890 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-575b5d47b6-n66fd" podUID="78739666-79c8-4af9-9766-6793e7975629" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.241:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.241:8443: connect: connection refused" Nov 24 
12:00:27 crc kubenswrapper[5072]: I1124 12:00:27.455441 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-575b5d47b6-n66fd" Nov 24 12:00:27 crc kubenswrapper[5072]: I1124 12:00:27.456510 5072 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"ee492ff17199a762006f692e01e4272485e0743ef5b342026c0a146e4ec6470b"} pod="openstack/horizon-575b5d47b6-n66fd" containerMessage="Container horizon failed startup probe, will be restarted" Nov 24 12:00:27 crc kubenswrapper[5072]: I1124 12:00:27.456563 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-575b5d47b6-n66fd" podUID="78739666-79c8-4af9-9766-6793e7975629" containerName="horizon" containerID="cri-o://ee492ff17199a762006f692e01e4272485e0743ef5b342026c0a146e4ec6470b" gracePeriod=30 Nov 24 12:00:37 crc kubenswrapper[5072]: I1124 12:00:37.017041 5072 scope.go:117] "RemoveContainer" containerID="4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f" Nov 24 12:00:37 crc kubenswrapper[5072]: E1124 12:00:37.018438 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:00:50 crc kubenswrapper[5072]: I1124 12:00:50.021698 5072 scope.go:117] "RemoveContainer" containerID="4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f" Nov 24 12:00:50 crc kubenswrapper[5072]: E1124 12:00:50.022662 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:00:58 crc kubenswrapper[5072]: I1124 12:00:58.012801 5072 generic.go:334] "Generic (PLEG): container finished" podID="3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" containerID="66054e0d1c884046c07bdf9ebcfb3c6f1bbbdc040b8d3e2aff52418bbfaa52d3" exitCode=137 Nov 24 12:00:58 crc kubenswrapper[5072]: I1124 12:00:58.013335 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-587d57694d-km6sf" event={"ID":"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f","Type":"ContainerDied","Data":"66054e0d1c884046c07bdf9ebcfb3c6f1bbbdc040b8d3e2aff52418bbfaa52d3"} Nov 24 12:00:58 crc kubenswrapper[5072]: I1124 12:00:58.022250 5072 generic.go:334] "Generic (PLEG): container finished" podID="78739666-79c8-4af9-9766-6793e7975629" containerID="ee492ff17199a762006f692e01e4272485e0743ef5b342026c0a146e4ec6470b" exitCode=137 Nov 24 12:00:58 crc kubenswrapper[5072]: I1124 12:00:58.022301 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-575b5d47b6-n66fd" event={"ID":"78739666-79c8-4af9-9766-6793e7975629","Type":"ContainerDied","Data":"ee492ff17199a762006f692e01e4272485e0743ef5b342026c0a146e4ec6470b"} Nov 24 12:00:58 crc kubenswrapper[5072]: I1124 12:00:58.022330 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-575b5d47b6-n66fd" 
event={"ID":"78739666-79c8-4af9-9766-6793e7975629","Type":"ContainerStarted","Data":"0c4b3ccceb0efb58fd292902710599f6b4cfd3fa8771bbada47607e39bfc1b44"} Nov 24 12:00:59 crc kubenswrapper[5072]: I1124 12:00:59.038156 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-587d57694d-km6sf" event={"ID":"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f","Type":"ContainerStarted","Data":"5c8a7216ac20c05b9c591c0c4e102ed060f4b3017033e3a9088f3e50a15ca7ed"} Nov 24 12:01:00 crc kubenswrapper[5072]: I1124 12:01:00.157748 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29399761-642mr"] Nov 24 12:01:00 crc kubenswrapper[5072]: E1124 12:01:00.158490 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70e336e6-2a4c-4fc0-a0ee-6668ad67cd14" containerName="collect-profiles" Nov 24 12:01:00 crc kubenswrapper[5072]: I1124 12:01:00.158508 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e336e6-2a4c-4fc0-a0ee-6668ad67cd14" containerName="collect-profiles" Nov 24 12:01:00 crc kubenswrapper[5072]: I1124 12:01:00.158752 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="70e336e6-2a4c-4fc0-a0ee-6668ad67cd14" containerName="collect-profiles" Nov 24 12:01:00 crc kubenswrapper[5072]: I1124 12:01:00.159469 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29399761-642mr" Nov 24 12:01:00 crc kubenswrapper[5072]: I1124 12:01:00.170177 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29399761-642mr"] Nov 24 12:01:00 crc kubenswrapper[5072]: I1124 12:01:00.282467 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360e5e7f-fc1f-4d24-8446-b97c9c04aa46-combined-ca-bundle\") pod \"keystone-cron-29399761-642mr\" (UID: \"360e5e7f-fc1f-4d24-8446-b97c9c04aa46\") " pod="openstack/keystone-cron-29399761-642mr" Nov 24 12:01:00 crc kubenswrapper[5072]: I1124 12:01:00.282626 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/360e5e7f-fc1f-4d24-8446-b97c9c04aa46-fernet-keys\") pod \"keystone-cron-29399761-642mr\" (UID: \"360e5e7f-fc1f-4d24-8446-b97c9c04aa46\") " pod="openstack/keystone-cron-29399761-642mr" Nov 24 12:01:00 crc kubenswrapper[5072]: I1124 12:01:00.282752 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/360e5e7f-fc1f-4d24-8446-b97c9c04aa46-config-data\") pod \"keystone-cron-29399761-642mr\" (UID: \"360e5e7f-fc1f-4d24-8446-b97c9c04aa46\") " pod="openstack/keystone-cron-29399761-642mr" Nov 24 12:01:00 crc kubenswrapper[5072]: I1124 12:01:00.282884 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fzqp\" (UniqueName: \"kubernetes.io/projected/360e5e7f-fc1f-4d24-8446-b97c9c04aa46-kube-api-access-9fzqp\") pod \"keystone-cron-29399761-642mr\" (UID: \"360e5e7f-fc1f-4d24-8446-b97c9c04aa46\") " pod="openstack/keystone-cron-29399761-642mr" Nov 24 12:01:00 crc kubenswrapper[5072]: I1124 12:01:00.384674 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/360e5e7f-fc1f-4d24-8446-b97c9c04aa46-fernet-keys\") pod \"keystone-cron-29399761-642mr\" (UID: \"360e5e7f-fc1f-4d24-8446-b97c9c04aa46\") " 
pod="openstack/keystone-cron-29399761-642mr" Nov 24 12:01:00 crc kubenswrapper[5072]: I1124 12:01:00.384778 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/360e5e7f-fc1f-4d24-8446-b97c9c04aa46-config-data\") pod \"keystone-cron-29399761-642mr\" (UID: \"360e5e7f-fc1f-4d24-8446-b97c9c04aa46\") " pod="openstack/keystone-cron-29399761-642mr" Nov 24 12:01:00 crc kubenswrapper[5072]: I1124 12:01:00.384849 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fzqp\" (UniqueName: \"kubernetes.io/projected/360e5e7f-fc1f-4d24-8446-b97c9c04aa46-kube-api-access-9fzqp\") pod \"keystone-cron-29399761-642mr\" (UID: \"360e5e7f-fc1f-4d24-8446-b97c9c04aa46\") " pod="openstack/keystone-cron-29399761-642mr" Nov 24 12:01:00 crc kubenswrapper[5072]: I1124 12:01:00.384970 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360e5e7f-fc1f-4d24-8446-b97c9c04aa46-combined-ca-bundle\") pod \"keystone-cron-29399761-642mr\" (UID: \"360e5e7f-fc1f-4d24-8446-b97c9c04aa46\") " pod="openstack/keystone-cron-29399761-642mr" Nov 24 12:01:00 crc kubenswrapper[5072]: I1124 12:01:00.391494 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/360e5e7f-fc1f-4d24-8446-b97c9c04aa46-fernet-keys\") pod \"keystone-cron-29399761-642mr\" (UID: \"360e5e7f-fc1f-4d24-8446-b97c9c04aa46\") " pod="openstack/keystone-cron-29399761-642mr" Nov 24 12:01:00 crc kubenswrapper[5072]: I1124 12:01:00.397120 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/360e5e7f-fc1f-4d24-8446-b97c9c04aa46-config-data\") pod \"keystone-cron-29399761-642mr\" (UID: \"360e5e7f-fc1f-4d24-8446-b97c9c04aa46\") " pod="openstack/keystone-cron-29399761-642mr" Nov 24 12:01:00 crc kubenswrapper[5072]: I1124 12:01:00.400391 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360e5e7f-fc1f-4d24-8446-b97c9c04aa46-combined-ca-bundle\") pod \"keystone-cron-29399761-642mr\" (UID: \"360e5e7f-fc1f-4d24-8446-b97c9c04aa46\") " pod="openstack/keystone-cron-29399761-642mr" Nov 24 12:01:00 crc kubenswrapper[5072]: I1124 12:01:00.404270 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fzqp\" (UniqueName: \"kubernetes.io/projected/360e5e7f-fc1f-4d24-8446-b97c9c04aa46-kube-api-access-9fzqp\") pod \"keystone-cron-29399761-642mr\" (UID: \"360e5e7f-fc1f-4d24-8446-b97c9c04aa46\") " pod="openstack/keystone-cron-29399761-642mr" Nov 24 12:01:00 crc kubenswrapper[5072]: I1124 12:01:00.488893 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29399761-642mr" Nov 24 12:01:00 crc kubenswrapper[5072]: I1124 12:01:00.977164 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29399761-642mr"] Nov 24 12:01:00 crc kubenswrapper[5072]: W1124 12:01:00.988127 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod360e5e7f_fc1f_4d24_8446_b97c9c04aa46.slice/crio-18534c4e5a1a9cccf4d5be3a503a0dfbdc92239cea685c8246d6501abf9866bc WatchSource:0}: Error finding container 18534c4e5a1a9cccf4d5be3a503a0dfbdc92239cea685c8246d6501abf9866bc: Status 404 returned error can't find the container with id 18534c4e5a1a9cccf4d5be3a503a0dfbdc92239cea685c8246d6501abf9866bc Nov 24 12:01:01 crc kubenswrapper[5072]: I1124 12:01:01.067932 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399761-642mr" event={"ID":"360e5e7f-fc1f-4d24-8446-b97c9c04aa46","Type":"ContainerStarted","Data":"18534c4e5a1a9cccf4d5be3a503a0dfbdc92239cea685c8246d6501abf9866bc"} Nov 24 12:01:02 crc kubenswrapper[5072]: I1124 12:01:02.080979 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399761-642mr" event={"ID":"360e5e7f-fc1f-4d24-8446-b97c9c04aa46","Type":"ContainerStarted","Data":"d576511f36d01c0ffec6277db322c1a9437ea33712fdbd1895c7b429ddafabb0"} Nov 24 12:01:02 crc kubenswrapper[5072]: I1124 12:01:02.104097 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29399761-642mr" podStartSLOduration=2.104070571 podStartE2EDuration="2.104070571s" podCreationTimestamp="2025-11-24 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:02.09883216 +0000 UTC m=+3113.810356636" watchObservedRunningTime="2025-11-24 12:01:02.104070571 +0000 UTC m=+3113.815595057" Nov 24 12:01:04 crc kubenswrapper[5072]: I1124 12:01:04.017698 5072 scope.go:117] "RemoveContainer" containerID="4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f" Nov 24 12:01:04 crc kubenswrapper[5072]: E1124 12:01:04.018447 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:01:07 crc kubenswrapper[5072]: I1124 12:01:07.283155 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-587d57694d-km6sf" Nov 24 12:01:07 crc kubenswrapper[5072]: I1124 12:01:07.283728 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-587d57694d-km6sf" Nov 24 12:01:07 crc kubenswrapper[5072]: I1124 12:01:07.454259 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-575b5d47b6-n66fd" Nov 24 12:01:07 crc kubenswrapper[5072]: I1124 12:01:07.454341 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-575b5d47b6-n66fd" Nov 24 12:01:07 crc kubenswrapper[5072]: I1124 12:01:07.455756 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-575b5d47b6-n66fd" 
podUID="78739666-79c8-4af9-9766-6793e7975629" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.241:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.241:8443: connect: connection refused" Nov 24 12:01:08 crc kubenswrapper[5072]: I1124 12:01:08.150703 5072 generic.go:334] "Generic (PLEG): container finished" podID="360e5e7f-fc1f-4d24-8446-b97c9c04aa46" containerID="d576511f36d01c0ffec6277db322c1a9437ea33712fdbd1895c7b429ddafabb0" exitCode=0 Nov 24 12:01:08 crc kubenswrapper[5072]: I1124 12:01:08.150752 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399761-642mr" event={"ID":"360e5e7f-fc1f-4d24-8446-b97c9c04aa46","Type":"ContainerDied","Data":"d576511f36d01c0ffec6277db322c1a9437ea33712fdbd1895c7b429ddafabb0"} Nov 24 12:01:09 crc kubenswrapper[5072]: I1124 12:01:09.512156 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29399761-642mr" Nov 24 12:01:09 crc kubenswrapper[5072]: I1124 12:01:09.551008 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/360e5e7f-fc1f-4d24-8446-b97c9c04aa46-config-data\") pod \"360e5e7f-fc1f-4d24-8446-b97c9c04aa46\" (UID: \"360e5e7f-fc1f-4d24-8446-b97c9c04aa46\") " Nov 24 12:01:09 crc kubenswrapper[5072]: I1124 12:01:09.556478 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360e5e7f-fc1f-4d24-8446-b97c9c04aa46-combined-ca-bundle\") pod \"360e5e7f-fc1f-4d24-8446-b97c9c04aa46\" (UID: \"360e5e7f-fc1f-4d24-8446-b97c9c04aa46\") " Nov 24 12:01:09 crc kubenswrapper[5072]: I1124 12:01:09.556938 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fzqp\" (UniqueName: \"kubernetes.io/projected/360e5e7f-fc1f-4d24-8446-b97c9c04aa46-kube-api-access-9fzqp\") pod \"360e5e7f-fc1f-4d24-8446-b97c9c04aa46\" (UID: \"360e5e7f-fc1f-4d24-8446-b97c9c04aa46\") " Nov 24 12:01:09 crc kubenswrapper[5072]: I1124 12:01:09.557085 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/360e5e7f-fc1f-4d24-8446-b97c9c04aa46-fernet-keys\") pod \"360e5e7f-fc1f-4d24-8446-b97c9c04aa46\" (UID: \"360e5e7f-fc1f-4d24-8446-b97c9c04aa46\") " Nov 24 12:01:09 crc kubenswrapper[5072]: I1124 12:01:09.560387 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/360e5e7f-fc1f-4d24-8446-b97c9c04aa46-kube-api-access-9fzqp" (OuterVolumeSpecName: "kube-api-access-9fzqp") pod "360e5e7f-fc1f-4d24-8446-b97c9c04aa46" (UID: "360e5e7f-fc1f-4d24-8446-b97c9c04aa46"). InnerVolumeSpecName "kube-api-access-9fzqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:01:09 crc kubenswrapper[5072]: I1124 12:01:09.566158 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/360e5e7f-fc1f-4d24-8446-b97c9c04aa46-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "360e5e7f-fc1f-4d24-8446-b97c9c04aa46" (UID: "360e5e7f-fc1f-4d24-8446-b97c9c04aa46"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:01:09 crc kubenswrapper[5072]: I1124 12:01:09.586501 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/360e5e7f-fc1f-4d24-8446-b97c9c04aa46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "360e5e7f-fc1f-4d24-8446-b97c9c04aa46" (UID: "360e5e7f-fc1f-4d24-8446-b97c9c04aa46"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:01:09 crc kubenswrapper[5072]: I1124 12:01:09.634612 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/360e5e7f-fc1f-4d24-8446-b97c9c04aa46-config-data" (OuterVolumeSpecName: "config-data") pod "360e5e7f-fc1f-4d24-8446-b97c9c04aa46" (UID: "360e5e7f-fc1f-4d24-8446-b97c9c04aa46"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:01:09 crc kubenswrapper[5072]: I1124 12:01:09.660839 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/360e5e7f-fc1f-4d24-8446-b97c9c04aa46-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:09 crc kubenswrapper[5072]: I1124 12:01:09.660883 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360e5e7f-fc1f-4d24-8446-b97c9c04aa46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:09 crc kubenswrapper[5072]: I1124 12:01:09.660898 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fzqp\" (UniqueName: \"kubernetes.io/projected/360e5e7f-fc1f-4d24-8446-b97c9c04aa46-kube-api-access-9fzqp\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:09 crc kubenswrapper[5072]: I1124 12:01:09.660918 5072 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/360e5e7f-fc1f-4d24-8446-b97c9c04aa46-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:10 crc kubenswrapper[5072]: I1124 12:01:10.176031 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399761-642mr" event={"ID":"360e5e7f-fc1f-4d24-8446-b97c9c04aa46","Type":"ContainerDied","Data":"18534c4e5a1a9cccf4d5be3a503a0dfbdc92239cea685c8246d6501abf9866bc"} Nov 24 12:01:10 crc kubenswrapper[5072]: I1124 12:01:10.176075 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29399761-642mr" Nov 24 12:01:10 crc kubenswrapper[5072]: I1124 12:01:10.176085 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18534c4e5a1a9cccf4d5be3a503a0dfbdc92239cea685c8246d6501abf9866bc" Nov 24 12:01:17 crc kubenswrapper[5072]: I1124 12:01:17.284515 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-587d57694d-km6sf" podUID="3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.240:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.240:8443: connect: connection refused" Nov 24 12:01:19 crc kubenswrapper[5072]: I1124 12:01:19.022826 5072 scope.go:117] "RemoveContainer" containerID="4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f" Nov 24 12:01:19 crc kubenswrapper[5072]: E1124 12:01:19.023471 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:01:19 crc kubenswrapper[5072]: I1124 12:01:19.951454 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-575b5d47b6-n66fd" Nov 24 12:01:21 crc kubenswrapper[5072]: I1124 12:01:21.701329 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-575b5d47b6-n66fd" Nov 24 12:01:21 crc kubenswrapper[5072]: I1124 12:01:21.762176 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-587d57694d-km6sf"] Nov 24 12:01:21 crc kubenswrapper[5072]: I1124 12:01:21.762420 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-587d57694d-km6sf" podUID="3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" containerName="horizon-log" containerID="cri-o://aec4b15829b4affb5daa97f04b55773c915c3c649ce3aa744732507ee9bac4c7" gracePeriod=30 Nov 24 12:01:21 crc kubenswrapper[5072]: I1124 12:01:21.762543 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-587d57694d-km6sf" podUID="3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" containerName="horizon" containerID="cri-o://5c8a7216ac20c05b9c591c0c4e102ed060f4b3017033e3a9088f3e50a15ca7ed" gracePeriod=30 Nov 24 12:01:22 crc kubenswrapper[5072]: I1124 12:01:22.313579 5072 generic.go:334] "Generic (PLEG): container finished" podID="3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" containerID="5c8a7216ac20c05b9c591c0c4e102ed060f4b3017033e3a9088f3e50a15ca7ed" exitCode=0 Nov 24 12:01:22 crc kubenswrapper[5072]: I1124 12:01:22.313618 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-587d57694d-km6sf" event={"ID":"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f","Type":"ContainerDied","Data":"5c8a7216ac20c05b9c591c0c4e102ed060f4b3017033e3a9088f3e50a15ca7ed"} Nov 24 12:01:22 crc kubenswrapper[5072]: I1124 12:01:22.313932 5072 scope.go:117] "RemoveContainer" containerID="66054e0d1c884046c07bdf9ebcfb3c6f1bbbdc040b8d3e2aff52418bbfaa52d3" Nov 24 12:01:31 crc kubenswrapper[5072]: I1124 12:01:31.017716 5072 scope.go:117] "RemoveContainer" containerID="4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f" Nov 24 12:01:31 crc kubenswrapper[5072]: E1124 
12:01:31.018495 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:01:46 crc kubenswrapper[5072]: I1124 12:01:46.016805 5072 scope.go:117] "RemoveContainer" containerID="4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f" Nov 24 12:01:46 crc kubenswrapper[5072]: E1124 12:01:46.017588 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.234730 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-587d57694d-km6sf" Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.392043 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-combined-ca-bundle\") pod \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\" (UID: \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\") " Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.392132 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-scripts\") pod \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\" (UID: \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\") " Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.392212 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-horizon-secret-key\") pod \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\" (UID: \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\") " Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.392236 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-config-data\") pod \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\" (UID: \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\") " Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.393058 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97wgk\" (UniqueName: \"kubernetes.io/projected/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-kube-api-access-97wgk\") pod \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\" (UID: \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\") " Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.393225 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-logs\") pod \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\" (UID: \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\") " Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.393318 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-horizon-tls-certs\") pod \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\" (UID: \"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f\") " Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.393745 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-logs" (OuterVolumeSpecName: "logs") pod "3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" (UID: "3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.394435 5072 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.399068 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" (UID: "3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.400016 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-kube-api-access-97wgk" (OuterVolumeSpecName: "kube-api-access-97wgk") pod "3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" (UID: "3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f"). InnerVolumeSpecName "kube-api-access-97wgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.431250 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-scripts" (OuterVolumeSpecName: "scripts") pod "3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" (UID: "3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.438047 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" (UID: "3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.441477 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-config-data" (OuterVolumeSpecName: "config-data") pod "3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" (UID: "3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.481017 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" (UID: "3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.496224 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.496257 5072 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.496267 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.496278 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97wgk\" (UniqueName: \"kubernetes.io/projected/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-kube-api-access-97wgk\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.496287 5072 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.496297 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.622220 5072 generic.go:334] "Generic (PLEG): container finished" podID="3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" containerID="aec4b15829b4affb5daa97f04b55773c915c3c649ce3aa744732507ee9bac4c7" exitCode=137 Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.622293 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-587d57694d-km6sf" event={"ID":"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f","Type":"ContainerDied","Data":"aec4b15829b4affb5daa97f04b55773c915c3c649ce3aa744732507ee9bac4c7"} Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.622334 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-587d57694d-km6sf" Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.622350 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-587d57694d-km6sf" event={"ID":"3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f","Type":"ContainerDied","Data":"c6d1efd7e2eb92c89e6fe373f194bd3a485005398840d3f80a84925037318db1"} Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.622418 5072 scope.go:117] "RemoveContainer" containerID="5c8a7216ac20c05b9c591c0c4e102ed060f4b3017033e3a9088f3e50a15ca7ed" Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.664398 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-587d57694d-km6sf"] Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.677614 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-587d57694d-km6sf"] Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.839978 5072 scope.go:117] "RemoveContainer" containerID="aec4b15829b4affb5daa97f04b55773c915c3c649ce3aa744732507ee9bac4c7" Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.860275 5072 scope.go:117] "RemoveContainer" containerID="5c8a7216ac20c05b9c591c0c4e102ed060f4b3017033e3a9088f3e50a15ca7ed" Nov 24 12:01:52 crc kubenswrapper[5072]: E1124 12:01:52.860920 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c8a7216ac20c05b9c591c0c4e102ed060f4b3017033e3a9088f3e50a15ca7ed\": container with ID starting with 5c8a7216ac20c05b9c591c0c4e102ed060f4b3017033e3a9088f3e50a15ca7ed not found: ID does not exist" containerID="5c8a7216ac20c05b9c591c0c4e102ed060f4b3017033e3a9088f3e50a15ca7ed" Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.860962 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c8a7216ac20c05b9c591c0c4e102ed060f4b3017033e3a9088f3e50a15ca7ed"} err="failed to get container status \"5c8a7216ac20c05b9c591c0c4e102ed060f4b3017033e3a9088f3e50a15ca7ed\": rpc error: code = NotFound desc = could not find container \"5c8a7216ac20c05b9c591c0c4e102ed060f4b3017033e3a9088f3e50a15ca7ed\": container with ID starting with 5c8a7216ac20c05b9c591c0c4e102ed060f4b3017033e3a9088f3e50a15ca7ed not found: ID does not exist" Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.860988 5072 scope.go:117] "RemoveContainer" containerID="aec4b15829b4affb5daa97f04b55773c915c3c649ce3aa744732507ee9bac4c7" Nov 24 12:01:52 crc kubenswrapper[5072]: E1124 12:01:52.861264 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aec4b15829b4affb5daa97f04b55773c915c3c649ce3aa744732507ee9bac4c7\": container with ID starting with aec4b15829b4affb5daa97f04b55773c915c3c649ce3aa744732507ee9bac4c7 not found: ID does not exist" containerID="aec4b15829b4affb5daa97f04b55773c915c3c649ce3aa744732507ee9bac4c7" Nov 24 12:01:52 crc kubenswrapper[5072]: I1124 12:01:52.861308 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aec4b15829b4affb5daa97f04b55773c915c3c649ce3aa744732507ee9bac4c7"} err="failed to get container status \"aec4b15829b4affb5daa97f04b55773c915c3c649ce3aa744732507ee9bac4c7\": rpc error: code = NotFound desc = could not find container \"aec4b15829b4affb5daa97f04b55773c915c3c649ce3aa744732507ee9bac4c7\": container with ID starting with aec4b15829b4affb5daa97f04b55773c915c3c649ce3aa744732507ee9bac4c7 not found: ID does not exist" Nov 24 12:01:53 crc 
kubenswrapper[5072]: I1124 12:01:53.034743 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" path="/var/lib/kubelet/pods/3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f/volumes" Nov 24 12:01:59 crc kubenswrapper[5072]: I1124 12:01:59.026619 5072 scope.go:117] "RemoveContainer" containerID="4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f" Nov 24 12:01:59 crc kubenswrapper[5072]: E1124 12:01:59.027267 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:02:11 crc kubenswrapper[5072]: I1124 12:02:11.019300 5072 scope.go:117] "RemoveContainer" containerID="4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f" Nov 24 12:02:11 crc kubenswrapper[5072]: E1124 12:02:11.020333 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:02:25 crc kubenswrapper[5072]: I1124 12:02:25.016683 5072 scope.go:117] "RemoveContainer" containerID="4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f" Nov 24 12:02:25 crc kubenswrapper[5072]: E1124 12:02:25.017438 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:02:39 crc kubenswrapper[5072]: I1124 12:02:39.025324 5072 scope.go:117] "RemoveContainer" containerID="4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f" Nov 24 12:02:39 crc kubenswrapper[5072]: E1124 12:02:39.027689 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:02:52 crc kubenswrapper[5072]: I1124 12:02:52.016992 5072 scope.go:117] "RemoveContainer" containerID="4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f" Nov 24 12:02:52 crc kubenswrapper[5072]: E1124 12:02:52.018248 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" 
podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:03:03 crc kubenswrapper[5072]: I1124 12:03:03.016877 5072 scope.go:117] "RemoveContainer" containerID="4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f" Nov 24 12:03:03 crc kubenswrapper[5072]: E1124 12:03:03.017798 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:03:04 crc kubenswrapper[5072]: I1124 12:03:04.278040 5072 generic.go:334] "Generic (PLEG): container finished" podID="4a074607-4e56-4d2e-a4ee-87906af89764" containerID="1d87411ad890d3383fdb2466f4b2255ae671da030dc8f2cf61121b7460f5c1b3" exitCode=0 Nov 24 12:03:04 crc kubenswrapper[5072]: I1124 12:03:04.278094 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-b55tw" event={"ID":"4a074607-4e56-4d2e-a4ee-87906af89764","Type":"ContainerDied","Data":"1d87411ad890d3383fdb2466f4b2255ae671da030dc8f2cf61121b7460f5c1b3"} Nov 24 12:03:05 crc kubenswrapper[5072]: I1124 12:03:05.770763 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-b55tw" Nov 24 12:03:05 crc kubenswrapper[5072]: I1124 12:03:05.936754 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a074607-4e56-4d2e-a4ee-87906af89764-config-data\") pod \"4a074607-4e56-4d2e-a4ee-87906af89764\" (UID: \"4a074607-4e56-4d2e-a4ee-87906af89764\") " Nov 24 12:03:05 crc kubenswrapper[5072]: I1124 12:03:05.937047 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/4a074607-4e56-4d2e-a4ee-87906af89764-job-config-data\") pod \"4a074607-4e56-4d2e-a4ee-87906af89764\" (UID: \"4a074607-4e56-4d2e-a4ee-87906af89764\") " Nov 24 12:03:05 crc kubenswrapper[5072]: I1124 12:03:05.937156 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a074607-4e56-4d2e-a4ee-87906af89764-combined-ca-bundle\") pod \"4a074607-4e56-4d2e-a4ee-87906af89764\" (UID: \"4a074607-4e56-4d2e-a4ee-87906af89764\") " Nov 24 12:03:05 crc kubenswrapper[5072]: I1124 12:03:05.937649 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7h55\" (UniqueName: \"kubernetes.io/projected/4a074607-4e56-4d2e-a4ee-87906af89764-kube-api-access-t7h55\") pod \"4a074607-4e56-4d2e-a4ee-87906af89764\" (UID: \"4a074607-4e56-4d2e-a4ee-87906af89764\") " Nov 24 12:03:05 crc kubenswrapper[5072]: I1124 12:03:05.942656 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a074607-4e56-4d2e-a4ee-87906af89764-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "4a074607-4e56-4d2e-a4ee-87906af89764" (UID: "4a074607-4e56-4d2e-a4ee-87906af89764"). InnerVolumeSpecName "job-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:03:05 crc kubenswrapper[5072]: I1124 12:03:05.943035 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a074607-4e56-4d2e-a4ee-87906af89764-kube-api-access-t7h55" (OuterVolumeSpecName: "kube-api-access-t7h55") pod "4a074607-4e56-4d2e-a4ee-87906af89764" (UID: "4a074607-4e56-4d2e-a4ee-87906af89764"). InnerVolumeSpecName "kube-api-access-t7h55". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:03:05 crc kubenswrapper[5072]: I1124 12:03:05.944742 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a074607-4e56-4d2e-a4ee-87906af89764-config-data" (OuterVolumeSpecName: "config-data") pod "4a074607-4e56-4d2e-a4ee-87906af89764" (UID: "4a074607-4e56-4d2e-a4ee-87906af89764"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:03:05 crc kubenswrapper[5072]: I1124 12:03:05.982316 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a074607-4e56-4d2e-a4ee-87906af89764-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a074607-4e56-4d2e-a4ee-87906af89764" (UID: "4a074607-4e56-4d2e-a4ee-87906af89764"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.041824 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a074607-4e56-4d2e-a4ee-87906af89764-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.041876 5072 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/4a074607-4e56-4d2e-a4ee-87906af89764-job-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.041898 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a074607-4e56-4d2e-a4ee-87906af89764-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.041915 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7h55\" (UniqueName: \"kubernetes.io/projected/4a074607-4e56-4d2e-a4ee-87906af89764-kube-api-access-t7h55\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.332525 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-b55tw" event={"ID":"4a074607-4e56-4d2e-a4ee-87906af89764","Type":"ContainerDied","Data":"6daad80fe4400ec67e7ab4cfd625d3b2eb92390cc5b7cf71ea478db93ed09e53"} Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.332567 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6daad80fe4400ec67e7ab4cfd625d3b2eb92390cc5b7cf71ea478db93ed09e53" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.332646 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-b55tw" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.765835 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Nov 24 12:03:06 crc kubenswrapper[5072]: E1124 12:03:06.767468 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" containerName="horizon-log" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.767628 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" containerName="horizon-log" Nov 24 12:03:06 crc kubenswrapper[5072]: E1124 12:03:06.767705 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" containerName="horizon" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.767763 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" containerName="horizon" Nov 24 12:03:06 crc kubenswrapper[5072]: E1124 12:03:06.767835 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="360e5e7f-fc1f-4d24-8446-b97c9c04aa46" containerName="keystone-cron" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.767917 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="360e5e7f-fc1f-4d24-8446-b97c9c04aa46" containerName="keystone-cron" Nov 24 12:03:06 crc kubenswrapper[5072]: E1124 12:03:06.768015 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a074607-4e56-4d2e-a4ee-87906af89764" containerName="manila-db-sync" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.768079 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a074607-4e56-4d2e-a4ee-87906af89764" containerName="manila-db-sync" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.768419 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" containerName="horizon" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.768505 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="360e5e7f-fc1f-4d24-8446-b97c9c04aa46" containerName="keystone-cron" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.768569 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a074607-4e56-4d2e-a4ee-87906af89764" containerName="manila-db-sync" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.768640 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" containerName="horizon-log" Nov 24 12:03:06 crc kubenswrapper[5072]: E1124 12:03:06.768909 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" containerName="horizon" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.768983 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" containerName="horizon" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.769263 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ae3eb1b-1c4a-4e8b-8429-f55ce79cca8f" containerName="horizon" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.770788 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.773071 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.774471 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-2wtjm" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.776835 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.777048 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.777080 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.832650 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-g6frb"] Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.836212 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-g6frb" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.858100 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0307a1dc-4248-472b-9b5e-51f2f116ac64-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-g6frb\" (UID: \"0307a1dc-4248-472b-9b5e-51f2f116ac64\") " pod="openstack/dnsmasq-dns-76b5fdb995-g6frb" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.858159 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0307a1dc-4248-472b-9b5e-51f2f116ac64-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-g6frb\" (UID: \"0307a1dc-4248-472b-9b5e-51f2f116ac64\") " pod="openstack/dnsmasq-dns-76b5fdb995-g6frb" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.858174 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18acd9e4-2e54-44ce-a600-f9ba836a6994-scripts\") pod \"manila-scheduler-0\" (UID: \"18acd9e4-2e54-44ce-a600-f9ba836a6994\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.858211 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0307a1dc-4248-472b-9b5e-51f2f116ac64-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-g6frb\" (UID: \"0307a1dc-4248-472b-9b5e-51f2f116ac64\") " pod="openstack/dnsmasq-dns-76b5fdb995-g6frb" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.858240 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glmrt\" (UniqueName: \"kubernetes.io/projected/18acd9e4-2e54-44ce-a600-f9ba836a6994-kube-api-access-glmrt\") pod \"manila-scheduler-0\" (UID: \"18acd9e4-2e54-44ce-a600-f9ba836a6994\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.858258 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18acd9e4-2e54-44ce-a600-f9ba836a6994-config-data\") pod \"manila-scheduler-0\" (UID: \"18acd9e4-2e54-44ce-a600-f9ba836a6994\") " 
pod="openstack/manila-scheduler-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.858282 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0307a1dc-4248-472b-9b5e-51f2f116ac64-config\") pod \"dnsmasq-dns-76b5fdb995-g6frb\" (UID: \"0307a1dc-4248-472b-9b5e-51f2f116ac64\") " pod="openstack/dnsmasq-dns-76b5fdb995-g6frb" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.858307 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18acd9e4-2e54-44ce-a600-f9ba836a6994-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"18acd9e4-2e54-44ce-a600-f9ba836a6994\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.858354 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x58w6\" (UniqueName: \"kubernetes.io/projected/0307a1dc-4248-472b-9b5e-51f2f116ac64-kube-api-access-x58w6\") pod \"dnsmasq-dns-76b5fdb995-g6frb\" (UID: \"0307a1dc-4248-472b-9b5e-51f2f116ac64\") " pod="openstack/dnsmasq-dns-76b5fdb995-g6frb" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.858413 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0307a1dc-4248-472b-9b5e-51f2f116ac64-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-g6frb\" (UID: \"0307a1dc-4248-472b-9b5e-51f2f116ac64\") " pod="openstack/dnsmasq-dns-76b5fdb995-g6frb" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.858433 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18acd9e4-2e54-44ce-a600-f9ba836a6994-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"18acd9e4-2e54-44ce-a600-f9ba836a6994\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.858452 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/18acd9e4-2e54-44ce-a600-f9ba836a6994-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"18acd9e4-2e54-44ce-a600-f9ba836a6994\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.865273 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-g6frb"] Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.892521 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.894358 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.903067 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.911175 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.959632 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/18acd9e4-2e54-44ce-a600-f9ba836a6994-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"18acd9e4-2e54-44ce-a600-f9ba836a6994\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.959688 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38124ab6-e614-4256-a175-a4e280a54132-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.959713 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0307a1dc-4248-472b-9b5e-51f2f116ac64-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-g6frb\" (UID: \"0307a1dc-4248-472b-9b5e-51f2f116ac64\") " pod="openstack/dnsmasq-dns-76b5fdb995-g6frb" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.959745 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0307a1dc-4248-472b-9b5e-51f2f116ac64-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-g6frb\" (UID: \"0307a1dc-4248-472b-9b5e-51f2f116ac64\") " pod="openstack/dnsmasq-dns-76b5fdb995-g6frb" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.959760 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf668\" (UniqueName: \"kubernetes.io/projected/38124ab6-e614-4256-a175-a4e280a54132-kube-api-access-pf668\") pod \"manila-share-share1-0\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.959777 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18acd9e4-2e54-44ce-a600-f9ba836a6994-scripts\") pod \"manila-scheduler-0\" (UID: \"18acd9e4-2e54-44ce-a600-f9ba836a6994\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.959811 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0307a1dc-4248-472b-9b5e-51f2f116ac64-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-g6frb\" (UID: \"0307a1dc-4248-472b-9b5e-51f2f116ac64\") " pod="openstack/dnsmasq-dns-76b5fdb995-g6frb" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.959836 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glmrt\" (UniqueName: \"kubernetes.io/projected/18acd9e4-2e54-44ce-a600-f9ba836a6994-kube-api-access-glmrt\") pod \"manila-scheduler-0\" (UID: \"18acd9e4-2e54-44ce-a600-f9ba836a6994\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.959855 5072 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18acd9e4-2e54-44ce-a600-f9ba836a6994-config-data\") pod \"manila-scheduler-0\" (UID: \"18acd9e4-2e54-44ce-a600-f9ba836a6994\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.959875 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0307a1dc-4248-472b-9b5e-51f2f116ac64-config\") pod \"dnsmasq-dns-76b5fdb995-g6frb\" (UID: \"0307a1dc-4248-472b-9b5e-51f2f116ac64\") " pod="openstack/dnsmasq-dns-76b5fdb995-g6frb" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.959909 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18acd9e4-2e54-44ce-a600-f9ba836a6994-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"18acd9e4-2e54-44ce-a600-f9ba836a6994\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.959948 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/38124ab6-e614-4256-a175-a4e280a54132-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.959969 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x58w6\" (UniqueName: \"kubernetes.io/projected/0307a1dc-4248-472b-9b5e-51f2f116ac64-kube-api-access-x58w6\") pod \"dnsmasq-dns-76b5fdb995-g6frb\" (UID: \"0307a1dc-4248-472b-9b5e-51f2f116ac64\") " pod="openstack/dnsmasq-dns-76b5fdb995-g6frb" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.959990 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38124ab6-e614-4256-a175-a4e280a54132-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.960013 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/38124ab6-e614-4256-a175-a4e280a54132-ceph\") pod \"manila-share-share1-0\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.960028 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38124ab6-e614-4256-a175-a4e280a54132-scripts\") pod \"manila-share-share1-0\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.960043 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38124ab6-e614-4256-a175-a4e280a54132-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.960060 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/38124ab6-e614-4256-a175-a4e280a54132-config-data\") pod \"manila-share-share1-0\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.960076 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0307a1dc-4248-472b-9b5e-51f2f116ac64-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-g6frb\" (UID: \"0307a1dc-4248-472b-9b5e-51f2f116ac64\") " pod="openstack/dnsmasq-dns-76b5fdb995-g6frb" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.960096 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18acd9e4-2e54-44ce-a600-f9ba836a6994-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"18acd9e4-2e54-44ce-a600-f9ba836a6994\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.961031 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0307a1dc-4248-472b-9b5e-51f2f116ac64-config\") pod \"dnsmasq-dns-76b5fdb995-g6frb\" (UID: \"0307a1dc-4248-472b-9b5e-51f2f116ac64\") " pod="openstack/dnsmasq-dns-76b5fdb995-g6frb" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.961138 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/18acd9e4-2e54-44ce-a600-f9ba836a6994-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"18acd9e4-2e54-44ce-a600-f9ba836a6994\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.961757 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0307a1dc-4248-472b-9b5e-51f2f116ac64-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-g6frb\" (UID: \"0307a1dc-4248-472b-9b5e-51f2f116ac64\") " pod="openstack/dnsmasq-dns-76b5fdb995-g6frb" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.962338 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0307a1dc-4248-472b-9b5e-51f2f116ac64-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-g6frb\" (UID: \"0307a1dc-4248-472b-9b5e-51f2f116ac64\") " pod="openstack/dnsmasq-dns-76b5fdb995-g6frb" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.963693 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18acd9e4-2e54-44ce-a600-f9ba836a6994-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"18acd9e4-2e54-44ce-a600-f9ba836a6994\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.963872 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0307a1dc-4248-472b-9b5e-51f2f116ac64-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-g6frb\" (UID: \"0307a1dc-4248-472b-9b5e-51f2f116ac64\") " pod="openstack/dnsmasq-dns-76b5fdb995-g6frb" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.964755 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0307a1dc-4248-472b-9b5e-51f2f116ac64-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-g6frb\" (UID: \"0307a1dc-4248-472b-9b5e-51f2f116ac64\") " 
pod="openstack/dnsmasq-dns-76b5fdb995-g6frb" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.967889 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18acd9e4-2e54-44ce-a600-f9ba836a6994-scripts\") pod \"manila-scheduler-0\" (UID: \"18acd9e4-2e54-44ce-a600-f9ba836a6994\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.967904 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18acd9e4-2e54-44ce-a600-f9ba836a6994-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"18acd9e4-2e54-44ce-a600-f9ba836a6994\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.989570 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18acd9e4-2e54-44ce-a600-f9ba836a6994-config-data\") pod \"manila-scheduler-0\" (UID: \"18acd9e4-2e54-44ce-a600-f9ba836a6994\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.990501 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x58w6\" (UniqueName: \"kubernetes.io/projected/0307a1dc-4248-472b-9b5e-51f2f116ac64-kube-api-access-x58w6\") pod \"dnsmasq-dns-76b5fdb995-g6frb\" (UID: \"0307a1dc-4248-472b-9b5e-51f2f116ac64\") " pod="openstack/dnsmasq-dns-76b5fdb995-g6frb" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.993252 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.994828 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.997968 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glmrt\" (UniqueName: \"kubernetes.io/projected/18acd9e4-2e54-44ce-a600-f9ba836a6994-kube-api-access-glmrt\") pod \"manila-scheduler-0\" (UID: \"18acd9e4-2e54-44ce-a600-f9ba836a6994\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:06 crc kubenswrapper[5072]: I1124 12:03:06.998187 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.043302 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.061258 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/38124ab6-e614-4256-a175-a4e280a54132-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.061308 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38124ab6-e614-4256-a175-a4e280a54132-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.061333 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baab652e-2ccd-4373-8c75-a10f8258bcfd-scripts\") pod \"manila-api-0\" (UID: 
\"baab652e-2ccd-4373-8c75-a10f8258bcfd\") " pod="openstack/manila-api-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.061350 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baab652e-2ccd-4373-8c75-a10f8258bcfd-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"baab652e-2ccd-4373-8c75-a10f8258bcfd\") " pod="openstack/manila-api-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.061385 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baab652e-2ccd-4373-8c75-a10f8258bcfd-logs\") pod \"manila-api-0\" (UID: \"baab652e-2ccd-4373-8c75-a10f8258bcfd\") " pod="openstack/manila-api-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.061408 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/38124ab6-e614-4256-a175-a4e280a54132-ceph\") pod \"manila-share-share1-0\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.061439 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38124ab6-e614-4256-a175-a4e280a54132-scripts\") pod \"manila-share-share1-0\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.061463 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38124ab6-e614-4256-a175-a4e280a54132-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.061490 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38124ab6-e614-4256-a175-a4e280a54132-config-data\") pod \"manila-share-share1-0\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.061514 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baab652e-2ccd-4373-8c75-a10f8258bcfd-config-data\") pod \"manila-api-0\" (UID: \"baab652e-2ccd-4373-8c75-a10f8258bcfd\") " pod="openstack/manila-api-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.061552 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/baab652e-2ccd-4373-8c75-a10f8258bcfd-etc-machine-id\") pod \"manila-api-0\" (UID: \"baab652e-2ccd-4373-8c75-a10f8258bcfd\") " pod="openstack/manila-api-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.061572 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38124ab6-e614-4256-a175-a4e280a54132-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.061596 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/baab652e-2ccd-4373-8c75-a10f8258bcfd-config-data-custom\") pod \"manila-api-0\" (UID: \"baab652e-2ccd-4373-8c75-a10f8258bcfd\") " pod="openstack/manila-api-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.061618 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf668\" (UniqueName: \"kubernetes.io/projected/38124ab6-e614-4256-a175-a4e280a54132-kube-api-access-pf668\") pod \"manila-share-share1-0\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.061631 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkv5g\" (UniqueName: \"kubernetes.io/projected/baab652e-2ccd-4373-8c75-a10f8258bcfd-kube-api-access-xkv5g\") pod \"manila-api-0\" (UID: \"baab652e-2ccd-4373-8c75-a10f8258bcfd\") " pod="openstack/manila-api-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.061807 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38124ab6-e614-4256-a175-a4e280a54132-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.061843 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/38124ab6-e614-4256-a175-a4e280a54132-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.067055 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/38124ab6-e614-4256-a175-a4e280a54132-ceph\") pod \"manila-share-share1-0\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.067275 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38124ab6-e614-4256-a175-a4e280a54132-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.069363 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38124ab6-e614-4256-a175-a4e280a54132-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.077768 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38124ab6-e614-4256-a175-a4e280a54132-scripts\") pod \"manila-share-share1-0\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.082851 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38124ab6-e614-4256-a175-a4e280a54132-config-data\") pod \"manila-share-share1-0\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:07 crc 
kubenswrapper[5072]: I1124 12:03:07.087177 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf668\" (UniqueName: \"kubernetes.io/projected/38124ab6-e614-4256-a175-a4e280a54132-kube-api-access-pf668\") pod \"manila-share-share1-0\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.094850 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.159562 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-g6frb" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.165672 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baab652e-2ccd-4373-8c75-a10f8258bcfd-scripts\") pod \"manila-api-0\" (UID: \"baab652e-2ccd-4373-8c75-a10f8258bcfd\") " pod="openstack/manila-api-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.165727 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baab652e-2ccd-4373-8c75-a10f8258bcfd-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"baab652e-2ccd-4373-8c75-a10f8258bcfd\") " pod="openstack/manila-api-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.165755 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baab652e-2ccd-4373-8c75-a10f8258bcfd-logs\") pod \"manila-api-0\" (UID: \"baab652e-2ccd-4373-8c75-a10f8258bcfd\") " pod="openstack/manila-api-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.165808 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baab652e-2ccd-4373-8c75-a10f8258bcfd-config-data\") pod \"manila-api-0\" (UID: \"baab652e-2ccd-4373-8c75-a10f8258bcfd\") " pod="openstack/manila-api-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.165869 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/baab652e-2ccd-4373-8c75-a10f8258bcfd-etc-machine-id\") pod \"manila-api-0\" (UID: \"baab652e-2ccd-4373-8c75-a10f8258bcfd\") " pod="openstack/manila-api-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.165945 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/baab652e-2ccd-4373-8c75-a10f8258bcfd-config-data-custom\") pod \"manila-api-0\" (UID: \"baab652e-2ccd-4373-8c75-a10f8258bcfd\") " pod="openstack/manila-api-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.165976 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkv5g\" (UniqueName: \"kubernetes.io/projected/baab652e-2ccd-4373-8c75-a10f8258bcfd-kube-api-access-xkv5g\") pod \"manila-api-0\" (UID: \"baab652e-2ccd-4373-8c75-a10f8258bcfd\") " pod="openstack/manila-api-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.166700 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/baab652e-2ccd-4373-8c75-a10f8258bcfd-etc-machine-id\") pod \"manila-api-0\" (UID: \"baab652e-2ccd-4373-8c75-a10f8258bcfd\") " pod="openstack/manila-api-0" Nov 24 
12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.167363 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baab652e-2ccd-4373-8c75-a10f8258bcfd-logs\") pod \"manila-api-0\" (UID: \"baab652e-2ccd-4373-8c75-a10f8258bcfd\") " pod="openstack/manila-api-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.171210 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baab652e-2ccd-4373-8c75-a10f8258bcfd-config-data\") pod \"manila-api-0\" (UID: \"baab652e-2ccd-4373-8c75-a10f8258bcfd\") " pod="openstack/manila-api-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.174169 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baab652e-2ccd-4373-8c75-a10f8258bcfd-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"baab652e-2ccd-4373-8c75-a10f8258bcfd\") " pod="openstack/manila-api-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.180138 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baab652e-2ccd-4373-8c75-a10f8258bcfd-scripts\") pod \"manila-api-0\" (UID: \"baab652e-2ccd-4373-8c75-a10f8258bcfd\") " pod="openstack/manila-api-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.180274 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/baab652e-2ccd-4373-8c75-a10f8258bcfd-config-data-custom\") pod \"manila-api-0\" (UID: \"baab652e-2ccd-4373-8c75-a10f8258bcfd\") " pod="openstack/manila-api-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.183918 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkv5g\" (UniqueName: \"kubernetes.io/projected/baab652e-2ccd-4373-8c75-a10f8258bcfd-kube-api-access-xkv5g\") pod \"manila-api-0\" (UID: \"baab652e-2ccd-4373-8c75-a10f8258bcfd\") " pod="openstack/manila-api-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.219331 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.303579 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.668102 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 24 12:03:07 crc kubenswrapper[5072]: I1124 12:03:07.783664 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-g6frb"] Nov 24 12:03:07 crc kubenswrapper[5072]: W1124 12:03:07.787759 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0307a1dc_4248_472b_9b5e_51f2f116ac64.slice/crio-d5d55ebed13b3734de5d62f5a16000591d412904bbe01e00bdd8ef809668f306 WatchSource:0}: Error finding container d5d55ebed13b3734de5d62f5a16000591d412904bbe01e00bdd8ef809668f306: Status 404 returned error can't find the container with id d5d55ebed13b3734de5d62f5a16000591d412904bbe01e00bdd8ef809668f306 Nov 24 12:03:08 crc kubenswrapper[5072]: I1124 12:03:08.024508 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 24 12:03:08 crc kubenswrapper[5072]: I1124 12:03:08.449283 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-g6frb" event={"ID":"0307a1dc-4248-472b-9b5e-51f2f116ac64","Type":"ContainerStarted","Data":"ac2df2b1befa48f1050121f2cd95c36a966d423acc55a207b37f3d6ebefc0a66"} Nov 24 12:03:08 crc kubenswrapper[5072]: I1124 12:03:08.449351 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-g6frb" event={"ID":"0307a1dc-4248-472b-9b5e-51f2f116ac64","Type":"ContainerStarted","Data":"d5d55ebed13b3734de5d62f5a16000591d412904bbe01e00bdd8ef809668f306"} Nov 24 12:03:08 crc kubenswrapper[5072]: I1124 12:03:08.453904 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"38124ab6-e614-4256-a175-a4e280a54132","Type":"ContainerStarted","Data":"93e3809a816660145811c05bad22e6a6108ec0e70e2f050528c38dfbd628a18e"} Nov 24 12:03:08 crc kubenswrapper[5072]: I1124 12:03:08.468328 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"18acd9e4-2e54-44ce-a600-f9ba836a6994","Type":"ContainerStarted","Data":"27e2fe43a76e9ebc770fce644a38f18b12f897d87a8fdb8d94b6c6eed8ad56ae"} Nov 24 12:03:08 crc kubenswrapper[5072]: I1124 12:03:08.713074 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 24 12:03:08 crc kubenswrapper[5072]: W1124 12:03:08.718800 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbaab652e_2ccd_4373_8c75_a10f8258bcfd.slice/crio-6c8137632b188d1b89cd4deda481a161034c221181d5cba3757386b3d318a118 WatchSource:0}: Error finding container 6c8137632b188d1b89cd4deda481a161034c221181d5cba3757386b3d318a118: Status 404 returned error can't find the container with id 6c8137632b188d1b89cd4deda481a161034c221181d5cba3757386b3d318a118 Nov 24 12:03:09 crc kubenswrapper[5072]: I1124 12:03:09.480850 5072 generic.go:334] "Generic (PLEG): container finished" podID="0307a1dc-4248-472b-9b5e-51f2f116ac64" containerID="ac2df2b1befa48f1050121f2cd95c36a966d423acc55a207b37f3d6ebefc0a66" exitCode=0 Nov 24 12:03:09 crc kubenswrapper[5072]: I1124 12:03:09.480932 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-g6frb" 
event={"ID":"0307a1dc-4248-472b-9b5e-51f2f116ac64","Type":"ContainerDied","Data":"ac2df2b1befa48f1050121f2cd95c36a966d423acc55a207b37f3d6ebefc0a66"} Nov 24 12:03:09 crc kubenswrapper[5072]: I1124 12:03:09.481456 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76b5fdb995-g6frb" Nov 24 12:03:09 crc kubenswrapper[5072]: I1124 12:03:09.481469 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-g6frb" event={"ID":"0307a1dc-4248-472b-9b5e-51f2f116ac64","Type":"ContainerStarted","Data":"68c6ae11c812eaf9e89565ed847bc28d36ad7909fc42186dd425ef3fa31137c5"} Nov 24 12:03:09 crc kubenswrapper[5072]: I1124 12:03:09.489014 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"baab652e-2ccd-4373-8c75-a10f8258bcfd","Type":"ContainerStarted","Data":"e63f8c6b5db9f53c40123918fdffe97d3fcef308cb10730d815a0815a5d5356d"} Nov 24 12:03:09 crc kubenswrapper[5072]: I1124 12:03:09.489070 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"baab652e-2ccd-4373-8c75-a10f8258bcfd","Type":"ContainerStarted","Data":"6c8137632b188d1b89cd4deda481a161034c221181d5cba3757386b3d318a118"} Nov 24 12:03:09 crc kubenswrapper[5072]: I1124 12:03:09.503460 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76b5fdb995-g6frb" podStartSLOduration=3.503442099 podStartE2EDuration="3.503442099s" podCreationTimestamp="2025-11-24 12:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:03:09.497065 +0000 UTC m=+3241.208589476" watchObservedRunningTime="2025-11-24 12:03:09.503442099 +0000 UTC m=+3241.214966575" Nov 24 12:03:09 crc kubenswrapper[5072]: I1124 12:03:09.947125 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Nov 24 12:03:10 crc kubenswrapper[5072]: I1124 12:03:10.503382 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"18acd9e4-2e54-44ce-a600-f9ba836a6994","Type":"ContainerStarted","Data":"51ddc6d164425f4c95638d0a73d5148ba775e3007a5e1e51ff42491dd048fc2a"} Nov 24 12:03:10 crc kubenswrapper[5072]: I1124 12:03:10.503750 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"18acd9e4-2e54-44ce-a600-f9ba836a6994","Type":"ContainerStarted","Data":"6412cbef088f8c03dea954f725ece5a4db13481e834b66f053b787dc95377cdc"} Nov 24 12:03:10 crc kubenswrapper[5072]: I1124 12:03:10.506718 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"baab652e-2ccd-4373-8c75-a10f8258bcfd","Type":"ContainerStarted","Data":"55daa16d88d917071c968a03d09546113f400e633e0c2a745e44231f85549ab4"} Nov 24 12:03:10 crc kubenswrapper[5072]: I1124 12:03:10.506874 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 24 12:03:10 crc kubenswrapper[5072]: I1124 12:03:10.529422 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.065957488 podStartE2EDuration="4.529404347s" podCreationTimestamp="2025-11-24 12:03:06 +0000 UTC" firstStartedPulling="2025-11-24 12:03:07.67650194 +0000 UTC m=+3239.388026416" lastFinishedPulling="2025-11-24 12:03:09.139948779 +0000 UTC m=+3240.851473275" observedRunningTime="2025-11-24 12:03:10.523685264 +0000 UTC m=+3242.235209740" 
watchObservedRunningTime="2025-11-24 12:03:10.529404347 +0000 UTC m=+3242.240928823" Nov 24 12:03:10 crc kubenswrapper[5072]: I1124 12:03:10.552636 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.552613858 podStartE2EDuration="4.552613858s" podCreationTimestamp="2025-11-24 12:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:03:10.541396117 +0000 UTC m=+3242.252920613" watchObservedRunningTime="2025-11-24 12:03:10.552613858 +0000 UTC m=+3242.264138334" Nov 24 12:03:11 crc kubenswrapper[5072]: I1124 12:03:11.517988 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="baab652e-2ccd-4373-8c75-a10f8258bcfd" containerName="manila-api-log" containerID="cri-o://e63f8c6b5db9f53c40123918fdffe97d3fcef308cb10730d815a0815a5d5356d" gracePeriod=30 Nov 24 12:03:11 crc kubenswrapper[5072]: I1124 12:03:11.518464 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="baab652e-2ccd-4373-8c75-a10f8258bcfd" containerName="manila-api" containerID="cri-o://55daa16d88d917071c968a03d09546113f400e633e0c2a745e44231f85549ab4" gracePeriod=30 Nov 24 12:03:12 crc kubenswrapper[5072]: I1124 12:03:12.395528 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:03:12 crc kubenswrapper[5072]: I1124 12:03:12.397038 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="761b2964-cd70-47d9-ade7-8ddfb3eb73c3" containerName="ceilometer-central-agent" containerID="cri-o://ffd0b3500c9774fad4dcbaf75c93c9ea57223eb9a31a2ce6a5960ac413fb7291" gracePeriod=30 Nov 24 12:03:12 crc kubenswrapper[5072]: I1124 12:03:12.397064 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="761b2964-cd70-47d9-ade7-8ddfb3eb73c3" containerName="proxy-httpd" containerID="cri-o://64f401f26854854a6a44fed6bc7b451c23dc5e2140b0b0a71a493d5fe27c9b8a" gracePeriod=30 Nov 24 12:03:12 crc kubenswrapper[5072]: I1124 12:03:12.397092 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="761b2964-cd70-47d9-ade7-8ddfb3eb73c3" containerName="sg-core" containerID="cri-o://972dc3a765f700930ddd30765dfcfd8c0d7199181792814ea03e27923f79a850" gracePeriod=30 Nov 24 12:03:12 crc kubenswrapper[5072]: I1124 12:03:12.397100 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="761b2964-cd70-47d9-ade7-8ddfb3eb73c3" containerName="ceilometer-notification-agent" containerID="cri-o://4630d6afa767f2b989b968e94698ffa151c51abba3dbaf45c5337880ca956ce5" gracePeriod=30 Nov 24 12:03:12 crc kubenswrapper[5072]: I1124 12:03:12.544576 5072 generic.go:334] "Generic (PLEG): container finished" podID="761b2964-cd70-47d9-ade7-8ddfb3eb73c3" containerID="972dc3a765f700930ddd30765dfcfd8c0d7199181792814ea03e27923f79a850" exitCode=2 Nov 24 12:03:12 crc kubenswrapper[5072]: I1124 12:03:12.544660 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"761b2964-cd70-47d9-ade7-8ddfb3eb73c3","Type":"ContainerDied","Data":"972dc3a765f700930ddd30765dfcfd8c0d7199181792814ea03e27923f79a850"} Nov 24 12:03:12 crc kubenswrapper[5072]: I1124 12:03:12.559562 5072 generic.go:334] "Generic (PLEG): container finished" 
podID="baab652e-2ccd-4373-8c75-a10f8258bcfd" containerID="55daa16d88d917071c968a03d09546113f400e633e0c2a745e44231f85549ab4" exitCode=0 Nov 24 12:03:12 crc kubenswrapper[5072]: I1124 12:03:12.559597 5072 generic.go:334] "Generic (PLEG): container finished" podID="baab652e-2ccd-4373-8c75-a10f8258bcfd" containerID="e63f8c6b5db9f53c40123918fdffe97d3fcef308cb10730d815a0815a5d5356d" exitCode=143 Nov 24 12:03:12 crc kubenswrapper[5072]: I1124 12:03:12.559621 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"baab652e-2ccd-4373-8c75-a10f8258bcfd","Type":"ContainerDied","Data":"55daa16d88d917071c968a03d09546113f400e633e0c2a745e44231f85549ab4"} Nov 24 12:03:12 crc kubenswrapper[5072]: I1124 12:03:12.559649 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"baab652e-2ccd-4373-8c75-a10f8258bcfd","Type":"ContainerDied","Data":"e63f8c6b5db9f53c40123918fdffe97d3fcef308cb10730d815a0815a5d5356d"} Nov 24 12:03:13 crc kubenswrapper[5072]: I1124 12:03:13.576427 5072 generic.go:334] "Generic (PLEG): container finished" podID="761b2964-cd70-47d9-ade7-8ddfb3eb73c3" containerID="64f401f26854854a6a44fed6bc7b451c23dc5e2140b0b0a71a493d5fe27c9b8a" exitCode=0 Nov 24 12:03:13 crc kubenswrapper[5072]: I1124 12:03:13.576753 5072 generic.go:334] "Generic (PLEG): container finished" podID="761b2964-cd70-47d9-ade7-8ddfb3eb73c3" containerID="ffd0b3500c9774fad4dcbaf75c93c9ea57223eb9a31a2ce6a5960ac413fb7291" exitCode=0 Nov 24 12:03:13 crc kubenswrapper[5072]: I1124 12:03:13.576510 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"761b2964-cd70-47d9-ade7-8ddfb3eb73c3","Type":"ContainerDied","Data":"64f401f26854854a6a44fed6bc7b451c23dc5e2140b0b0a71a493d5fe27c9b8a"} Nov 24 12:03:13 crc kubenswrapper[5072]: I1124 12:03:13.576808 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"761b2964-cd70-47d9-ade7-8ddfb3eb73c3","Type":"ContainerDied","Data":"ffd0b3500c9774fad4dcbaf75c93c9ea57223eb9a31a2ce6a5960ac413fb7291"} Nov 24 12:03:15 crc kubenswrapper[5072]: I1124 12:03:15.620192 5072 generic.go:334] "Generic (PLEG): container finished" podID="761b2964-cd70-47d9-ade7-8ddfb3eb73c3" containerID="4630d6afa767f2b989b968e94698ffa151c51abba3dbaf45c5337880ca956ce5" exitCode=0 Nov 24 12:03:15 crc kubenswrapper[5072]: I1124 12:03:15.620512 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"761b2964-cd70-47d9-ade7-8ddfb3eb73c3","Type":"ContainerDied","Data":"4630d6afa767f2b989b968e94698ffa151c51abba3dbaf45c5337880ca956ce5"} Nov 24 12:03:15 crc kubenswrapper[5072]: I1124 12:03:15.632467 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"baab652e-2ccd-4373-8c75-a10f8258bcfd","Type":"ContainerDied","Data":"6c8137632b188d1b89cd4deda481a161034c221181d5cba3757386b3d318a118"} Nov 24 12:03:15 crc kubenswrapper[5072]: I1124 12:03:15.632517 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c8137632b188d1b89cd4deda481a161034c221181d5cba3757386b3d318a118" Nov 24 12:03:15 crc kubenswrapper[5072]: I1124 12:03:15.644223 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Nov 24 12:03:15 crc kubenswrapper[5072]: I1124 12:03:15.761330 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkv5g\" (UniqueName: \"kubernetes.io/projected/baab652e-2ccd-4373-8c75-a10f8258bcfd-kube-api-access-xkv5g\") pod \"baab652e-2ccd-4373-8c75-a10f8258bcfd\" (UID: \"baab652e-2ccd-4373-8c75-a10f8258bcfd\") " Nov 24 12:03:15 crc kubenswrapper[5072]: I1124 12:03:15.761426 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baab652e-2ccd-4373-8c75-a10f8258bcfd-logs\") pod \"baab652e-2ccd-4373-8c75-a10f8258bcfd\" (UID: \"baab652e-2ccd-4373-8c75-a10f8258bcfd\") " Nov 24 12:03:15 crc kubenswrapper[5072]: I1124 12:03:15.761601 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baab652e-2ccd-4373-8c75-a10f8258bcfd-scripts\") pod \"baab652e-2ccd-4373-8c75-a10f8258bcfd\" (UID: \"baab652e-2ccd-4373-8c75-a10f8258bcfd\") " Nov 24 12:03:15 crc kubenswrapper[5072]: I1124 12:03:15.761711 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baab652e-2ccd-4373-8c75-a10f8258bcfd-combined-ca-bundle\") pod \"baab652e-2ccd-4373-8c75-a10f8258bcfd\" (UID: \"baab652e-2ccd-4373-8c75-a10f8258bcfd\") " Nov 24 12:03:15 crc kubenswrapper[5072]: I1124 12:03:15.761807 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/baab652e-2ccd-4373-8c75-a10f8258bcfd-etc-machine-id\") pod \"baab652e-2ccd-4373-8c75-a10f8258bcfd\" (UID: \"baab652e-2ccd-4373-8c75-a10f8258bcfd\") " Nov 24 12:03:15 crc kubenswrapper[5072]: I1124 12:03:15.761865 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/baab652e-2ccd-4373-8c75-a10f8258bcfd-config-data-custom\") pod \"baab652e-2ccd-4373-8c75-a10f8258bcfd\" (UID: \"baab652e-2ccd-4373-8c75-a10f8258bcfd\") " Nov 24 12:03:15 crc kubenswrapper[5072]: I1124 12:03:15.761911 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baab652e-2ccd-4373-8c75-a10f8258bcfd-config-data\") pod \"baab652e-2ccd-4373-8c75-a10f8258bcfd\" (UID: \"baab652e-2ccd-4373-8c75-a10f8258bcfd\") " Nov 24 12:03:15 crc kubenswrapper[5072]: I1124 12:03:15.762154 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baab652e-2ccd-4373-8c75-a10f8258bcfd-logs" (OuterVolumeSpecName: "logs") pod "baab652e-2ccd-4373-8c75-a10f8258bcfd" (UID: "baab652e-2ccd-4373-8c75-a10f8258bcfd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:03:15 crc kubenswrapper[5072]: I1124 12:03:15.762358 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baab652e-2ccd-4373-8c75-a10f8258bcfd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "baab652e-2ccd-4373-8c75-a10f8258bcfd" (UID: "baab652e-2ccd-4373-8c75-a10f8258bcfd"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:03:15 crc kubenswrapper[5072]: I1124 12:03:15.762897 5072 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/baab652e-2ccd-4373-8c75-a10f8258bcfd-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:15 crc kubenswrapper[5072]: I1124 12:03:15.762922 5072 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baab652e-2ccd-4373-8c75-a10f8258bcfd-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:15 crc kubenswrapper[5072]: I1124 12:03:15.767430 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baab652e-2ccd-4373-8c75-a10f8258bcfd-scripts" (OuterVolumeSpecName: "scripts") pod "baab652e-2ccd-4373-8c75-a10f8258bcfd" (UID: "baab652e-2ccd-4373-8c75-a10f8258bcfd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:03:15 crc kubenswrapper[5072]: I1124 12:03:15.768088 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baab652e-2ccd-4373-8c75-a10f8258bcfd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "baab652e-2ccd-4373-8c75-a10f8258bcfd" (UID: "baab652e-2ccd-4373-8c75-a10f8258bcfd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:03:15 crc kubenswrapper[5072]: I1124 12:03:15.769762 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baab652e-2ccd-4373-8c75-a10f8258bcfd-kube-api-access-xkv5g" (OuterVolumeSpecName: "kube-api-access-xkv5g") pod "baab652e-2ccd-4373-8c75-a10f8258bcfd" (UID: "baab652e-2ccd-4373-8c75-a10f8258bcfd"). InnerVolumeSpecName "kube-api-access-xkv5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:03:15 crc kubenswrapper[5072]: I1124 12:03:15.804282 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baab652e-2ccd-4373-8c75-a10f8258bcfd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "baab652e-2ccd-4373-8c75-a10f8258bcfd" (UID: "baab652e-2ccd-4373-8c75-a10f8258bcfd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:03:15 crc kubenswrapper[5072]: I1124 12:03:15.855214 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baab652e-2ccd-4373-8c75-a10f8258bcfd-config-data" (OuterVolumeSpecName: "config-data") pod "baab652e-2ccd-4373-8c75-a10f8258bcfd" (UID: "baab652e-2ccd-4373-8c75-a10f8258bcfd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:03:15 crc kubenswrapper[5072]: I1124 12:03:15.864756 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baab652e-2ccd-4373-8c75-a10f8258bcfd-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:15 crc kubenswrapper[5072]: I1124 12:03:15.864793 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkv5g\" (UniqueName: \"kubernetes.io/projected/baab652e-2ccd-4373-8c75-a10f8258bcfd-kube-api-access-xkv5g\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:15 crc kubenswrapper[5072]: I1124 12:03:15.864810 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baab652e-2ccd-4373-8c75-a10f8258bcfd-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:15 crc kubenswrapper[5072]: I1124 12:03:15.864822 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baab652e-2ccd-4373-8c75-a10f8258bcfd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:15 crc kubenswrapper[5072]: I1124 12:03:15.864835 5072 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/baab652e-2ccd-4373-8c75-a10f8258bcfd-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:15 crc kubenswrapper[5072]: I1124 12:03:15.966034 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.043867 5072 scope.go:117] "RemoveContainer" containerID="4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f" Nov 24 12:03:16 crc kubenswrapper[5072]: E1124 12:03:16.044274 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.068540 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-log-httpd\") pod \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.068643 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-scripts\") pod \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.068740 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k44jn\" (UniqueName: \"kubernetes.io/projected/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-kube-api-access-k44jn\") pod \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.068776 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-sg-core-conf-yaml\") pod 
\"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.068811 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-ceilometer-tls-certs\") pod \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.068828 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-combined-ca-bundle\") pod \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.068922 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-run-httpd\") pod \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.068949 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-config-data\") pod \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.071788 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "761b2964-cd70-47d9-ade7-8ddfb3eb73c3" (UID: "761b2964-cd70-47d9-ade7-8ddfb3eb73c3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.072431 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "761b2964-cd70-47d9-ade7-8ddfb3eb73c3" (UID: "761b2964-cd70-47d9-ade7-8ddfb3eb73c3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.074608 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-scripts" (OuterVolumeSpecName: "scripts") pod "761b2964-cd70-47d9-ade7-8ddfb3eb73c3" (UID: "761b2964-cd70-47d9-ade7-8ddfb3eb73c3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.075650 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-kube-api-access-k44jn" (OuterVolumeSpecName: "kube-api-access-k44jn") pod "761b2964-cd70-47d9-ade7-8ddfb3eb73c3" (UID: "761b2964-cd70-47d9-ade7-8ddfb3eb73c3"). InnerVolumeSpecName "kube-api-access-k44jn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.098767 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "761b2964-cd70-47d9-ade7-8ddfb3eb73c3" (UID: "761b2964-cd70-47d9-ade7-8ddfb3eb73c3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.127806 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "761b2964-cd70-47d9-ade7-8ddfb3eb73c3" (UID: "761b2964-cd70-47d9-ade7-8ddfb3eb73c3"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.153269 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "761b2964-cd70-47d9-ade7-8ddfb3eb73c3" (UID: "761b2964-cd70-47d9-ade7-8ddfb3eb73c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.171403 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-config-data" (OuterVolumeSpecName: "config-data") pod "761b2964-cd70-47d9-ade7-8ddfb3eb73c3" (UID: "761b2964-cd70-47d9-ade7-8ddfb3eb73c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.171607 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-config-data\") pod \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\" (UID: \"761b2964-cd70-47d9-ade7-8ddfb3eb73c3\") " Nov 24 12:03:16 crc kubenswrapper[5072]: W1124 12:03:16.171748 5072 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/761b2964-cd70-47d9-ade7-8ddfb3eb73c3/volumes/kubernetes.io~secret/config-data Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.171770 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-config-data" (OuterVolumeSpecName: "config-data") pod "761b2964-cd70-47d9-ade7-8ddfb3eb73c3" (UID: "761b2964-cd70-47d9-ade7-8ddfb3eb73c3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.172485 5072 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.172508 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.172523 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k44jn\" (UniqueName: \"kubernetes.io/projected/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-kube-api-access-k44jn\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.172535 5072 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.172547 5072 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.172559 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.172570 5072 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.172580 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/761b2964-cd70-47d9-ade7-8ddfb3eb73c3-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.644510 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.644508 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"761b2964-cd70-47d9-ade7-8ddfb3eb73c3","Type":"ContainerDied","Data":"af411ad8d3469e55fb5440dd5046e8278b736be7bb284db06c93028f44c90340"} Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.644916 5072 scope.go:117] "RemoveContainer" containerID="64f401f26854854a6a44fed6bc7b451c23dc5e2140b0b0a71a493d5fe27c9b8a" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.644520 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.702710 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.713208 5072 scope.go:117] "RemoveContainer" containerID="972dc3a765f700930ddd30765dfcfd8c0d7199181792814ea03e27923f79a850" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.718664 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.748362 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Nov 24 12:03:16 crc kubenswrapper[5072]: E1124 12:03:16.748954 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="761b2964-cd70-47d9-ade7-8ddfb3eb73c3" containerName="proxy-httpd" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.748975 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="761b2964-cd70-47d9-ade7-8ddfb3eb73c3" containerName="proxy-httpd" Nov 24 12:03:16 crc kubenswrapper[5072]: E1124 12:03:16.748990 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baab652e-2ccd-4373-8c75-a10f8258bcfd" containerName="manila-api" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.748997 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="baab652e-2ccd-4373-8c75-a10f8258bcfd" containerName="manila-api" Nov 24 12:03:16 crc kubenswrapper[5072]: E1124 12:03:16.749006 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="761b2964-cd70-47d9-ade7-8ddfb3eb73c3" containerName="ceilometer-notification-agent" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.749014 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="761b2964-cd70-47d9-ade7-8ddfb3eb73c3" containerName="ceilometer-notification-agent" Nov 24 12:03:16 crc kubenswrapper[5072]: E1124 12:03:16.749028 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="761b2964-cd70-47d9-ade7-8ddfb3eb73c3" containerName="ceilometer-central-agent" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.749036 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="761b2964-cd70-47d9-ade7-8ddfb3eb73c3" containerName="ceilometer-central-agent" Nov 24 12:03:16 crc kubenswrapper[5072]: E1124 12:03:16.749051 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baab652e-2ccd-4373-8c75-a10f8258bcfd" containerName="manila-api-log" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.749058 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="baab652e-2ccd-4373-8c75-a10f8258bcfd" containerName="manila-api-log" Nov 24 12:03:16 crc kubenswrapper[5072]: E1124 12:03:16.749093 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="761b2964-cd70-47d9-ade7-8ddfb3eb73c3" containerName="sg-core" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.749103 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="761b2964-cd70-47d9-ade7-8ddfb3eb73c3" containerName="sg-core" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.749363 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="761b2964-cd70-47d9-ade7-8ddfb3eb73c3" containerName="ceilometer-notification-agent" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.749402 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="761b2964-cd70-47d9-ade7-8ddfb3eb73c3" containerName="ceilometer-central-agent" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 
12:03:16.749419 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="baab652e-2ccd-4373-8c75-a10f8258bcfd" containerName="manila-api-log" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.749433 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="761b2964-cd70-47d9-ade7-8ddfb3eb73c3" containerName="proxy-httpd" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.749459 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="baab652e-2ccd-4373-8c75-a10f8258bcfd" containerName="manila-api" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.749475 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="761b2964-cd70-47d9-ade7-8ddfb3eb73c3" containerName="sg-core" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.750817 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.753554 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.753945 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.754119 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.762686 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.764663 5072 scope.go:117] "RemoveContainer" containerID="4630d6afa767f2b989b968e94698ffa151c51abba3dbaf45c5337880ca956ce5" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.776006 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.789056 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.804199 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.806865 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.811651 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.811716 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.811884 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.814616 5072 scope.go:117] "RemoveContainer" containerID="ffd0b3500c9774fad4dcbaf75c93c9ea57223eb9a31a2ce6a5960ac413fb7291" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.825664 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.885357 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4e064b6-df4e-436b-9dec-c72ff87569f2-etc-machine-id\") pod \"manila-api-0\" (UID: \"f4e064b6-df4e-436b-9dec-c72ff87569f2\") " pod="openstack/manila-api-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.885461 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5778c6d9-fc74-4bc8-b5da-97b24931714a-run-httpd\") pod \"ceilometer-0\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " pod="openstack/ceilometer-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.885487 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " pod="openstack/ceilometer-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.885525 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9547l\" (UniqueName: \"kubernetes.io/projected/5778c6d9-fc74-4bc8-b5da-97b24931714a-kube-api-access-9547l\") pod \"ceilometer-0\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " pod="openstack/ceilometer-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.885542 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-config-data\") pod \"ceilometer-0\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " pod="openstack/ceilometer-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.885586 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-scripts\") pod \"ceilometer-0\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " pod="openstack/ceilometer-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.885614 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " pod="openstack/ceilometer-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 
12:03:16.885635 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4e064b6-df4e-436b-9dec-c72ff87569f2-logs\") pod \"manila-api-0\" (UID: \"f4e064b6-df4e-436b-9dec-c72ff87569f2\") " pod="openstack/manila-api-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.885650 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4e064b6-df4e-436b-9dec-c72ff87569f2-internal-tls-certs\") pod \"manila-api-0\" (UID: \"f4e064b6-df4e-436b-9dec-c72ff87569f2\") " pod="openstack/manila-api-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.885672 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4e064b6-df4e-436b-9dec-c72ff87569f2-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"f4e064b6-df4e-436b-9dec-c72ff87569f2\") " pod="openstack/manila-api-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.885691 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4e064b6-df4e-436b-9dec-c72ff87569f2-config-data\") pod \"manila-api-0\" (UID: \"f4e064b6-df4e-436b-9dec-c72ff87569f2\") " pod="openstack/manila-api-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.885707 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4e064b6-df4e-436b-9dec-c72ff87569f2-config-data-custom\") pod \"manila-api-0\" (UID: \"f4e064b6-df4e-436b-9dec-c72ff87569f2\") " pod="openstack/manila-api-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.885733 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5778c6d9-fc74-4bc8-b5da-97b24931714a-log-httpd\") pod \"ceilometer-0\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " pod="openstack/ceilometer-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.885750 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx7r2\" (UniqueName: \"kubernetes.io/projected/f4e064b6-df4e-436b-9dec-c72ff87569f2-kube-api-access-vx7r2\") pod \"manila-api-0\" (UID: \"f4e064b6-df4e-436b-9dec-c72ff87569f2\") " pod="openstack/manila-api-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.885773 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " pod="openstack/ceilometer-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.885790 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4e064b6-df4e-436b-9dec-c72ff87569f2-public-tls-certs\") pod \"manila-api-0\" (UID: \"f4e064b6-df4e-436b-9dec-c72ff87569f2\") " pod="openstack/manila-api-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.885825 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f4e064b6-df4e-436b-9dec-c72ff87569f2-scripts\") pod \"manila-api-0\" (UID: \"f4e064b6-df4e-436b-9dec-c72ff87569f2\") " pod="openstack/manila-api-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.987869 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " pod="openstack/ceilometer-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.987925 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4e064b6-df4e-436b-9dec-c72ff87569f2-logs\") pod \"manila-api-0\" (UID: \"f4e064b6-df4e-436b-9dec-c72ff87569f2\") " pod="openstack/manila-api-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.987960 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4e064b6-df4e-436b-9dec-c72ff87569f2-internal-tls-certs\") pod \"manila-api-0\" (UID: \"f4e064b6-df4e-436b-9dec-c72ff87569f2\") " pod="openstack/manila-api-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.987992 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4e064b6-df4e-436b-9dec-c72ff87569f2-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"f4e064b6-df4e-436b-9dec-c72ff87569f2\") " pod="openstack/manila-api-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.988021 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4e064b6-df4e-436b-9dec-c72ff87569f2-config-data\") pod \"manila-api-0\" (UID: \"f4e064b6-df4e-436b-9dec-c72ff87569f2\") " pod="openstack/manila-api-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.988044 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4e064b6-df4e-436b-9dec-c72ff87569f2-config-data-custom\") pod \"manila-api-0\" (UID: \"f4e064b6-df4e-436b-9dec-c72ff87569f2\") " pod="openstack/manila-api-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.988087 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5778c6d9-fc74-4bc8-b5da-97b24931714a-log-httpd\") pod \"ceilometer-0\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " pod="openstack/ceilometer-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.988111 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx7r2\" (UniqueName: \"kubernetes.io/projected/f4e064b6-df4e-436b-9dec-c72ff87569f2-kube-api-access-vx7r2\") pod \"manila-api-0\" (UID: \"f4e064b6-df4e-436b-9dec-c72ff87569f2\") " pod="openstack/manila-api-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.988148 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4e064b6-df4e-436b-9dec-c72ff87569f2-public-tls-certs\") pod \"manila-api-0\" (UID: \"f4e064b6-df4e-436b-9dec-c72ff87569f2\") " pod="openstack/manila-api-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.988168 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " pod="openstack/ceilometer-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.988200 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4e064b6-df4e-436b-9dec-c72ff87569f2-scripts\") pod \"manila-api-0\" (UID: \"f4e064b6-df4e-436b-9dec-c72ff87569f2\") " pod="openstack/manila-api-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.988237 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4e064b6-df4e-436b-9dec-c72ff87569f2-etc-machine-id\") pod \"manila-api-0\" (UID: \"f4e064b6-df4e-436b-9dec-c72ff87569f2\") " pod="openstack/manila-api-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.988299 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5778c6d9-fc74-4bc8-b5da-97b24931714a-run-httpd\") pod \"ceilometer-0\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " pod="openstack/ceilometer-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.988325 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " pod="openstack/ceilometer-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.988420 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9547l\" (UniqueName: \"kubernetes.io/projected/5778c6d9-fc74-4bc8-b5da-97b24931714a-kube-api-access-9547l\") pod \"ceilometer-0\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " pod="openstack/ceilometer-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.988470 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-config-data\") pod \"ceilometer-0\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " pod="openstack/ceilometer-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.988610 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-scripts\") pod \"ceilometer-0\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " pod="openstack/ceilometer-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.988844 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4e064b6-df4e-436b-9dec-c72ff87569f2-logs\") pod \"manila-api-0\" (UID: \"f4e064b6-df4e-436b-9dec-c72ff87569f2\") " pod="openstack/manila-api-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.990310 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4e064b6-df4e-436b-9dec-c72ff87569f2-etc-machine-id\") pod \"manila-api-0\" (UID: \"f4e064b6-df4e-436b-9dec-c72ff87569f2\") " pod="openstack/manila-api-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.990847 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5778c6d9-fc74-4bc8-b5da-97b24931714a-run-httpd\") pod \"ceilometer-0\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " pod="openstack/ceilometer-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.992587 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5778c6d9-fc74-4bc8-b5da-97b24931714a-log-httpd\") pod \"ceilometer-0\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " pod="openstack/ceilometer-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.996402 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " pod="openstack/ceilometer-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.996851 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4e064b6-df4e-436b-9dec-c72ff87569f2-public-tls-certs\") pod \"manila-api-0\" (UID: \"f4e064b6-df4e-436b-9dec-c72ff87569f2\") " pod="openstack/manila-api-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.996851 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4e064b6-df4e-436b-9dec-c72ff87569f2-scripts\") pod \"manila-api-0\" (UID: \"f4e064b6-df4e-436b-9dec-c72ff87569f2\") " pod="openstack/manila-api-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.997036 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4e064b6-df4e-436b-9dec-c72ff87569f2-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"f4e064b6-df4e-436b-9dec-c72ff87569f2\") " pod="openstack/manila-api-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.997182 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " pod="openstack/ceilometer-0" Nov 24 12:03:16 crc kubenswrapper[5072]: I1124 12:03:16.997632 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4e064b6-df4e-436b-9dec-c72ff87569f2-config-data\") pod \"manila-api-0\" (UID: \"f4e064b6-df4e-436b-9dec-c72ff87569f2\") " pod="openstack/manila-api-0" Nov 24 12:03:17 crc kubenswrapper[5072]: I1124 12:03:17.001151 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4e064b6-df4e-436b-9dec-c72ff87569f2-internal-tls-certs\") pod \"manila-api-0\" (UID: \"f4e064b6-df4e-436b-9dec-c72ff87569f2\") " pod="openstack/manila-api-0" Nov 24 12:03:17 crc kubenswrapper[5072]: I1124 12:03:17.003498 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4e064b6-df4e-436b-9dec-c72ff87569f2-config-data-custom\") pod \"manila-api-0\" (UID: \"f4e064b6-df4e-436b-9dec-c72ff87569f2\") " pod="openstack/manila-api-0" Nov 24 12:03:17 crc kubenswrapper[5072]: I1124 12:03:17.003557 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " pod="openstack/ceilometer-0" Nov 24 12:03:17 crc kubenswrapper[5072]: I1124 12:03:17.005359 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-scripts\") pod \"ceilometer-0\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " pod="openstack/ceilometer-0" Nov 24 12:03:17 crc kubenswrapper[5072]: I1124 12:03:17.006056 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9547l\" (UniqueName: \"kubernetes.io/projected/5778c6d9-fc74-4bc8-b5da-97b24931714a-kube-api-access-9547l\") pod \"ceilometer-0\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " pod="openstack/ceilometer-0" Nov 24 12:03:17 crc kubenswrapper[5072]: I1124 12:03:17.009556 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-config-data\") pod \"ceilometer-0\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " pod="openstack/ceilometer-0" Nov 24 12:03:17 crc kubenswrapper[5072]: I1124 12:03:17.011800 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx7r2\" (UniqueName: \"kubernetes.io/projected/f4e064b6-df4e-436b-9dec-c72ff87569f2-kube-api-access-vx7r2\") pod \"manila-api-0\" (UID: \"f4e064b6-df4e-436b-9dec-c72ff87569f2\") " pod="openstack/manila-api-0" Nov 24 12:03:17 crc kubenswrapper[5072]: I1124 12:03:17.030024 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="761b2964-cd70-47d9-ade7-8ddfb3eb73c3" path="/var/lib/kubelet/pods/761b2964-cd70-47d9-ade7-8ddfb3eb73c3/volumes" Nov 24 12:03:17 crc kubenswrapper[5072]: I1124 12:03:17.031088 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baab652e-2ccd-4373-8c75-a10f8258bcfd" path="/var/lib/kubelet/pods/baab652e-2ccd-4373-8c75-a10f8258bcfd/volumes" Nov 24 12:03:17 crc kubenswrapper[5072]: I1124 12:03:17.077554 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 24 12:03:17 crc kubenswrapper[5072]: I1124 12:03:17.096178 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Nov 24 12:03:17 crc kubenswrapper[5072]: I1124 12:03:17.161596 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76b5fdb995-g6frb" Nov 24 12:03:17 crc kubenswrapper[5072]: I1124 12:03:17.254137 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-jrg65"] Nov 24 12:03:17 crc kubenswrapper[5072]: I1124 12:03:17.255961 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" podUID="5621b8b6-4676-4b1c-992c-839a60accf2f" containerName="dnsmasq-dns" containerID="cri-o://a3d4be33e860993bf6cf98325480de7cbe9f49c4cf2d65e2e3c0445b781fb432" gracePeriod=10 Nov 24 12:03:17 crc kubenswrapper[5072]: I1124 12:03:17.273494 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:03:17 crc kubenswrapper[5072]: I1124 12:03:17.655248 5072 generic.go:334] "Generic (PLEG): container finished" podID="5621b8b6-4676-4b1c-992c-839a60accf2f" containerID="a3d4be33e860993bf6cf98325480de7cbe9f49c4cf2d65e2e3c0445b781fb432" exitCode=0 Nov 24 12:03:17 crc kubenswrapper[5072]: I1124 12:03:17.655310 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" event={"ID":"5621b8b6-4676-4b1c-992c-839a60accf2f","Type":"ContainerDied","Data":"a3d4be33e860993bf6cf98325480de7cbe9f49c4cf2d65e2e3c0445b781fb432"} Nov 24 12:03:17 crc kubenswrapper[5072]: I1124 12:03:17.658020 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"38124ab6-e614-4256-a175-a4e280a54132","Type":"ContainerStarted","Data":"e627df6144b89804dfbc0d66ecda3fa8690657b376e18ba26a3923141149220f"} Nov 24 12:03:17 crc kubenswrapper[5072]: I1124 12:03:17.658068 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"38124ab6-e614-4256-a175-a4e280a54132","Type":"ContainerStarted","Data":"14343f9fa448753f261f46b3f99393ff96c5b753a5347ff2622b2c7baba901d2"} Nov 24 12:03:17 crc kubenswrapper[5072]: I1124 12:03:17.691365 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.473948782 podStartE2EDuration="11.691347097s" podCreationTimestamp="2025-11-24 12:03:06 +0000 UTC" firstStartedPulling="2025-11-24 12:03:08.033550439 +0000 UTC m=+3239.745074915" lastFinishedPulling="2025-11-24 12:03:16.250948764 +0000 UTC m=+3247.962473230" observedRunningTime="2025-11-24 12:03:17.687489131 +0000 UTC m=+3249.399013607" watchObservedRunningTime="2025-11-24 12:03:17.691347097 +0000 UTC m=+3249.402871573" Nov 24 12:03:17 crc kubenswrapper[5072]: I1124 12:03:17.766098 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 24 12:03:17 crc kubenswrapper[5072]: I1124 12:03:17.855773 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:03:17 crc kubenswrapper[5072]: I1124 12:03:17.980762 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" Nov 24 12:03:18 crc kubenswrapper[5072]: I1124 12:03:18.116889 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-config\") pod \"5621b8b6-4676-4b1c-992c-839a60accf2f\" (UID: \"5621b8b6-4676-4b1c-992c-839a60accf2f\") " Nov 24 12:03:18 crc kubenswrapper[5072]: I1124 12:03:18.116979 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-ovsdbserver-sb\") pod \"5621b8b6-4676-4b1c-992c-839a60accf2f\" (UID: \"5621b8b6-4676-4b1c-992c-839a60accf2f\") " Nov 24 12:03:18 crc kubenswrapper[5072]: I1124 12:03:18.117063 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57q7p\" (UniqueName: \"kubernetes.io/projected/5621b8b6-4676-4b1c-992c-839a60accf2f-kube-api-access-57q7p\") pod \"5621b8b6-4676-4b1c-992c-839a60accf2f\" (UID: \"5621b8b6-4676-4b1c-992c-839a60accf2f\") " Nov 24 12:03:18 crc kubenswrapper[5072]: I1124 12:03:18.117130 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-openstack-edpm-ipam\") pod \"5621b8b6-4676-4b1c-992c-839a60accf2f\" (UID: \"5621b8b6-4676-4b1c-992c-839a60accf2f\") " Nov 24 12:03:18 crc kubenswrapper[5072]: I1124 12:03:18.117197 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-dns-svc\") pod \"5621b8b6-4676-4b1c-992c-839a60accf2f\" (UID: \"5621b8b6-4676-4b1c-992c-839a60accf2f\") " Nov 24 12:03:18 crc kubenswrapper[5072]: I1124 12:03:18.117266 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-ovsdbserver-nb\") pod \"5621b8b6-4676-4b1c-992c-839a60accf2f\" (UID: \"5621b8b6-4676-4b1c-992c-839a60accf2f\") " Nov 24 12:03:18 crc kubenswrapper[5072]: I1124 12:03:18.150182 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5621b8b6-4676-4b1c-992c-839a60accf2f-kube-api-access-57q7p" (OuterVolumeSpecName: "kube-api-access-57q7p") pod "5621b8b6-4676-4b1c-992c-839a60accf2f" (UID: "5621b8b6-4676-4b1c-992c-839a60accf2f"). InnerVolumeSpecName "kube-api-access-57q7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:03:18 crc kubenswrapper[5072]: I1124 12:03:18.219483 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57q7p\" (UniqueName: \"kubernetes.io/projected/5621b8b6-4676-4b1c-992c-839a60accf2f-kube-api-access-57q7p\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:18 crc kubenswrapper[5072]: I1124 12:03:18.223277 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5621b8b6-4676-4b1c-992c-839a60accf2f" (UID: "5621b8b6-4676-4b1c-992c-839a60accf2f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:03:18 crc kubenswrapper[5072]: I1124 12:03:18.224563 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5621b8b6-4676-4b1c-992c-839a60accf2f" (UID: "5621b8b6-4676-4b1c-992c-839a60accf2f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:03:18 crc kubenswrapper[5072]: I1124 12:03:18.231221 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "5621b8b6-4676-4b1c-992c-839a60accf2f" (UID: "5621b8b6-4676-4b1c-992c-839a60accf2f"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:03:18 crc kubenswrapper[5072]: I1124 12:03:18.238509 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5621b8b6-4676-4b1c-992c-839a60accf2f" (UID: "5621b8b6-4676-4b1c-992c-839a60accf2f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:03:18 crc kubenswrapper[5072]: I1124 12:03:18.252495 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-config" (OuterVolumeSpecName: "config") pod "5621b8b6-4676-4b1c-992c-839a60accf2f" (UID: "5621b8b6-4676-4b1c-992c-839a60accf2f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:03:18 crc kubenswrapper[5072]: I1124 12:03:18.321562 5072 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:18 crc kubenswrapper[5072]: I1124 12:03:18.321610 5072 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:18 crc kubenswrapper[5072]: I1124 12:03:18.321625 5072 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:18 crc kubenswrapper[5072]: I1124 12:03:18.321640 5072 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:18 crc kubenswrapper[5072]: I1124 12:03:18.321653 5072 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5621b8b6-4676-4b1c-992c-839a60accf2f-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:18 crc kubenswrapper[5072]: I1124 12:03:18.673660 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"f4e064b6-df4e-436b-9dec-c72ff87569f2","Type":"ContainerStarted","Data":"d4a90ccbfcb56d24c012b0939b893bd66e1ee8c96d2a3fc34695f72f3ab4212b"} Nov 24 12:03:18 crc kubenswrapper[5072]: I1124 12:03:18.675118 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/manila-api-0" event={"ID":"f4e064b6-df4e-436b-9dec-c72ff87569f2","Type":"ContainerStarted","Data":"a51919399fec924fce398300b3b1f7ce5fed789c9018ee33920759a9658be321"} Nov 24 12:03:18 crc kubenswrapper[5072]: I1124 12:03:18.678033 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5778c6d9-fc74-4bc8-b5da-97b24931714a","Type":"ContainerStarted","Data":"3fe4f31aa16c2370d8c8c2d0fa02ce06b783d11869d1e0f457d6e1d4ab7e6507"} Nov 24 12:03:18 crc kubenswrapper[5072]: I1124 12:03:18.682535 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" event={"ID":"5621b8b6-4676-4b1c-992c-839a60accf2f","Type":"ContainerDied","Data":"3716264b6193e6ed9589b5a0c86c39b2eaab02ece8cd351b639fcb5baa94459a"} Nov 24 12:03:18 crc kubenswrapper[5072]: I1124 12:03:18.682590 5072 scope.go:117] "RemoveContainer" containerID="a3d4be33e860993bf6cf98325480de7cbe9f49c4cf2d65e2e3c0445b781fb432" Nov 24 12:03:18 crc kubenswrapper[5072]: I1124 12:03:18.682711 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-jrg65" Nov 24 12:03:18 crc kubenswrapper[5072]: I1124 12:03:18.725023 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-jrg65"] Nov 24 12:03:18 crc kubenswrapper[5072]: I1124 12:03:18.732530 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-jrg65"] Nov 24 12:03:18 crc kubenswrapper[5072]: I1124 12:03:18.782748 5072 scope.go:117] "RemoveContainer" containerID="9d904d00700c38dbefc8e8705784549afa994843f6e475f67fc3b4ee79347a20" Nov 24 12:03:19 crc kubenswrapper[5072]: I1124 12:03:19.037111 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5621b8b6-4676-4b1c-992c-839a60accf2f" path="/var/lib/kubelet/pods/5621b8b6-4676-4b1c-992c-839a60accf2f/volumes" Nov 24 12:03:19 crc kubenswrapper[5072]: I1124 12:03:19.697856 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5778c6d9-fc74-4bc8-b5da-97b24931714a","Type":"ContainerStarted","Data":"844c018bc7163f6c3b82fbd6138216b4b8df38297044fa7f5af0ffadfab336c6"} Nov 24 12:03:19 crc kubenswrapper[5072]: I1124 12:03:19.699366 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"f4e064b6-df4e-436b-9dec-c72ff87569f2","Type":"ContainerStarted","Data":"4fe9bd1d93c2465798030c88f8dbaa21afb961fa96d734170af7c3524caa1e71"} Nov 24 12:03:19 crc kubenswrapper[5072]: I1124 12:03:19.699574 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 24 12:03:19 crc kubenswrapper[5072]: I1124 12:03:19.717388 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.717340634 podStartE2EDuration="3.717340634s" podCreationTimestamp="2025-11-24 12:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:03:19.717060297 +0000 UTC m=+3251.428584793" watchObservedRunningTime="2025-11-24 12:03:19.717340634 +0000 UTC m=+3251.428865110" Nov 24 12:03:20 crc kubenswrapper[5072]: I1124 12:03:20.791134 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:03:23 crc kubenswrapper[5072]: I1124 12:03:23.741459 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5778c6d9-fc74-4bc8-b5da-97b24931714a","Type":"ContainerStarted","Data":"ad378b08318928714643ce4f98d542b634a3800c100debe90b0f99362447a9e2"} Nov 24 12:03:25 crc kubenswrapper[5072]: I1124 12:03:25.758726 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5778c6d9-fc74-4bc8-b5da-97b24931714a","Type":"ContainerStarted","Data":"d164df8c86a15784373ce94fd976f52735b4763dcd3a0cb65e7cac64ac416673"} Nov 24 12:03:27 crc kubenswrapper[5072]: I1124 12:03:27.220195 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 24 12:03:28 crc kubenswrapper[5072]: I1124 12:03:28.775016 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Nov 24 12:03:28 crc kubenswrapper[5072]: I1124 12:03:28.789739 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5778c6d9-fc74-4bc8-b5da-97b24931714a","Type":"ContainerStarted","Data":"10240a5c6ef81039a5443102982f014194a12f6c8d92ee15800b712f124ffc91"} Nov 24 12:03:28 crc kubenswrapper[5072]: I1124 12:03:28.789921 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5778c6d9-fc74-4bc8-b5da-97b24931714a" containerName="ceilometer-central-agent" containerID="cri-o://844c018bc7163f6c3b82fbd6138216b4b8df38297044fa7f5af0ffadfab336c6" gracePeriod=30 Nov 24 12:03:28 crc kubenswrapper[5072]: I1124 12:03:28.790108 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 12:03:28 crc kubenswrapper[5072]: I1124 12:03:28.790178 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5778c6d9-fc74-4bc8-b5da-97b24931714a" containerName="proxy-httpd" containerID="cri-o://10240a5c6ef81039a5443102982f014194a12f6c8d92ee15800b712f124ffc91" gracePeriod=30 Nov 24 12:03:28 crc kubenswrapper[5072]: I1124 12:03:28.790236 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5778c6d9-fc74-4bc8-b5da-97b24931714a" containerName="sg-core" containerID="cri-o://d164df8c86a15784373ce94fd976f52735b4763dcd3a0cb65e7cac64ac416673" gracePeriod=30 Nov 24 12:03:28 crc kubenswrapper[5072]: I1124 12:03:28.790287 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5778c6d9-fc74-4bc8-b5da-97b24931714a" containerName="ceilometer-notification-agent" containerID="cri-o://ad378b08318928714643ce4f98d542b634a3800c100debe90b0f99362447a9e2" gracePeriod=30 Nov 24 12:03:28 crc kubenswrapper[5072]: I1124 12:03:28.841833 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Nov 24 12:03:28 crc kubenswrapper[5072]: I1124 12:03:28.842294 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="18acd9e4-2e54-44ce-a600-f9ba836a6994" containerName="manila-scheduler" containerID="cri-o://6412cbef088f8c03dea954f725ece5a4db13481e834b66f053b787dc95377cdc" gracePeriod=30 Nov 24 12:03:28 crc kubenswrapper[5072]: I1124 12:03:28.842358 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="18acd9e4-2e54-44ce-a600-f9ba836a6994" containerName="probe" containerID="cri-o://51ddc6d164425f4c95638d0a73d5148ba775e3007a5e1e51ff42491dd048fc2a" gracePeriod=30 Nov 24 12:03:28 crc kubenswrapper[5072]: 
I1124 12:03:28.860960 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.735345205 podStartE2EDuration="12.860940101s" podCreationTimestamp="2025-11-24 12:03:16 +0000 UTC" firstStartedPulling="2025-11-24 12:03:17.898969139 +0000 UTC m=+3249.610493615" lastFinishedPulling="2025-11-24 12:03:28.024564035 +0000 UTC m=+3259.736088511" observedRunningTime="2025-11-24 12:03:28.851224408 +0000 UTC m=+3260.562748884" watchObservedRunningTime="2025-11-24 12:03:28.860940101 +0000 UTC m=+3260.572464587" Nov 24 12:03:28 crc kubenswrapper[5072]: I1124 12:03:28.911358 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Nov 24 12:03:28 crc kubenswrapper[5072]: I1124 12:03:28.963844 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.025402 5072 scope.go:117] "RemoveContainer" containerID="4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f" Nov 24 12:03:29 crc kubenswrapper[5072]: E1124 12:03:29.025928 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.709335 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.808410 5072 generic.go:334] "Generic (PLEG): container finished" podID="18acd9e4-2e54-44ce-a600-f9ba836a6994" containerID="51ddc6d164425f4c95638d0a73d5148ba775e3007a5e1e51ff42491dd048fc2a" exitCode=0 Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.808509 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"18acd9e4-2e54-44ce-a600-f9ba836a6994","Type":"ContainerDied","Data":"51ddc6d164425f4c95638d0a73d5148ba775e3007a5e1e51ff42491dd048fc2a"} Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.814590 5072 generic.go:334] "Generic (PLEG): container finished" podID="5778c6d9-fc74-4bc8-b5da-97b24931714a" containerID="10240a5c6ef81039a5443102982f014194a12f6c8d92ee15800b712f124ffc91" exitCode=0 Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.814617 5072 generic.go:334] "Generic (PLEG): container finished" podID="5778c6d9-fc74-4bc8-b5da-97b24931714a" containerID="d164df8c86a15784373ce94fd976f52735b4763dcd3a0cb65e7cac64ac416673" exitCode=2 Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.814626 5072 generic.go:334] "Generic (PLEG): container finished" podID="5778c6d9-fc74-4bc8-b5da-97b24931714a" containerID="ad378b08318928714643ce4f98d542b634a3800c100debe90b0f99362447a9e2" exitCode=0 Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.814632 5072 generic.go:334] "Generic (PLEG): container finished" podID="5778c6d9-fc74-4bc8-b5da-97b24931714a" containerID="844c018bc7163f6c3b82fbd6138216b4b8df38297044fa7f5af0ffadfab336c6" exitCode=0 Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.814658 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.814731 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5778c6d9-fc74-4bc8-b5da-97b24931714a","Type":"ContainerDied","Data":"10240a5c6ef81039a5443102982f014194a12f6c8d92ee15800b712f124ffc91"} Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.814757 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5778c6d9-fc74-4bc8-b5da-97b24931714a","Type":"ContainerDied","Data":"d164df8c86a15784373ce94fd976f52735b4763dcd3a0cb65e7cac64ac416673"} Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.814767 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5778c6d9-fc74-4bc8-b5da-97b24931714a","Type":"ContainerDied","Data":"ad378b08318928714643ce4f98d542b634a3800c100debe90b0f99362447a9e2"} Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.814776 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5778c6d9-fc74-4bc8-b5da-97b24931714a","Type":"ContainerDied","Data":"844c018bc7163f6c3b82fbd6138216b4b8df38297044fa7f5af0ffadfab336c6"} Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.814786 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5778c6d9-fc74-4bc8-b5da-97b24931714a","Type":"ContainerDied","Data":"3fe4f31aa16c2370d8c8c2d0fa02ce06b783d11869d1e0f457d6e1d4ab7e6507"} Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.814801 5072 scope.go:117] "RemoveContainer" containerID="10240a5c6ef81039a5443102982f014194a12f6c8d92ee15800b712f124ffc91" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.815428 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="38124ab6-e614-4256-a175-a4e280a54132" containerName="manila-share" containerID="cri-o://14343f9fa448753f261f46b3f99393ff96c5b753a5347ff2622b2c7baba901d2" gracePeriod=30 Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.815540 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="38124ab6-e614-4256-a175-a4e280a54132" containerName="probe" containerID="cri-o://e627df6144b89804dfbc0d66ecda3fa8690657b376e18ba26a3923141149220f" gracePeriod=30 Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.835780 5072 scope.go:117] "RemoveContainer" containerID="d164df8c86a15784373ce94fd976f52735b4763dcd3a0cb65e7cac64ac416673" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.853086 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-ceilometer-tls-certs\") pod \"5778c6d9-fc74-4bc8-b5da-97b24931714a\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.853297 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5778c6d9-fc74-4bc8-b5da-97b24931714a-log-httpd\") pod \"5778c6d9-fc74-4bc8-b5da-97b24931714a\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.853389 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-scripts\") pod 
\"5778c6d9-fc74-4bc8-b5da-97b24931714a\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.853432 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-sg-core-conf-yaml\") pod \"5778c6d9-fc74-4bc8-b5da-97b24931714a\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.853453 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-combined-ca-bundle\") pod \"5778c6d9-fc74-4bc8-b5da-97b24931714a\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.853472 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-config-data\") pod \"5778c6d9-fc74-4bc8-b5da-97b24931714a\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.853538 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9547l\" (UniqueName: \"kubernetes.io/projected/5778c6d9-fc74-4bc8-b5da-97b24931714a-kube-api-access-9547l\") pod \"5778c6d9-fc74-4bc8-b5da-97b24931714a\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.853566 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5778c6d9-fc74-4bc8-b5da-97b24931714a-run-httpd\") pod \"5778c6d9-fc74-4bc8-b5da-97b24931714a\" (UID: \"5778c6d9-fc74-4bc8-b5da-97b24931714a\") " Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.853881 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5778c6d9-fc74-4bc8-b5da-97b24931714a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5778c6d9-fc74-4bc8-b5da-97b24931714a" (UID: "5778c6d9-fc74-4bc8-b5da-97b24931714a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.854016 5072 scope.go:117] "RemoveContainer" containerID="ad378b08318928714643ce4f98d542b634a3800c100debe90b0f99362447a9e2" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.854193 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5778c6d9-fc74-4bc8-b5da-97b24931714a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5778c6d9-fc74-4bc8-b5da-97b24931714a" (UID: "5778c6d9-fc74-4bc8-b5da-97b24931714a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.854286 5072 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5778c6d9-fc74-4bc8-b5da-97b24931714a-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.858488 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-scripts" (OuterVolumeSpecName: "scripts") pod "5778c6d9-fc74-4bc8-b5da-97b24931714a" (UID: "5778c6d9-fc74-4bc8-b5da-97b24931714a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.859346 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5778c6d9-fc74-4bc8-b5da-97b24931714a-kube-api-access-9547l" (OuterVolumeSpecName: "kube-api-access-9547l") pod "5778c6d9-fc74-4bc8-b5da-97b24931714a" (UID: "5778c6d9-fc74-4bc8-b5da-97b24931714a"). InnerVolumeSpecName "kube-api-access-9547l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.878461 5072 scope.go:117] "RemoveContainer" containerID="844c018bc7163f6c3b82fbd6138216b4b8df38297044fa7f5af0ffadfab336c6" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.891602 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5778c6d9-fc74-4bc8-b5da-97b24931714a" (UID: "5778c6d9-fc74-4bc8-b5da-97b24931714a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.900225 5072 scope.go:117] "RemoveContainer" containerID="10240a5c6ef81039a5443102982f014194a12f6c8d92ee15800b712f124ffc91" Nov 24 12:03:29 crc kubenswrapper[5072]: E1124 12:03:29.900782 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10240a5c6ef81039a5443102982f014194a12f6c8d92ee15800b712f124ffc91\": container with ID starting with 10240a5c6ef81039a5443102982f014194a12f6c8d92ee15800b712f124ffc91 not found: ID does not exist" containerID="10240a5c6ef81039a5443102982f014194a12f6c8d92ee15800b712f124ffc91" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.900824 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10240a5c6ef81039a5443102982f014194a12f6c8d92ee15800b712f124ffc91"} err="failed to get container status \"10240a5c6ef81039a5443102982f014194a12f6c8d92ee15800b712f124ffc91\": rpc error: code = NotFound desc = could not find container \"10240a5c6ef81039a5443102982f014194a12f6c8d92ee15800b712f124ffc91\": container with ID starting with 10240a5c6ef81039a5443102982f014194a12f6c8d92ee15800b712f124ffc91 not found: ID does not exist" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.900849 5072 scope.go:117] "RemoveContainer" containerID="d164df8c86a15784373ce94fd976f52735b4763dcd3a0cb65e7cac64ac416673" Nov 24 12:03:29 crc kubenswrapper[5072]: E1124 12:03:29.901206 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d164df8c86a15784373ce94fd976f52735b4763dcd3a0cb65e7cac64ac416673\": container with ID starting with d164df8c86a15784373ce94fd976f52735b4763dcd3a0cb65e7cac64ac416673 not found: ID does not exist" containerID="d164df8c86a15784373ce94fd976f52735b4763dcd3a0cb65e7cac64ac416673" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.901228 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d164df8c86a15784373ce94fd976f52735b4763dcd3a0cb65e7cac64ac416673"} err="failed to get container status \"d164df8c86a15784373ce94fd976f52735b4763dcd3a0cb65e7cac64ac416673\": rpc error: code = NotFound desc = could not find container \"d164df8c86a15784373ce94fd976f52735b4763dcd3a0cb65e7cac64ac416673\": container with ID starting with 
d164df8c86a15784373ce94fd976f52735b4763dcd3a0cb65e7cac64ac416673 not found: ID does not exist" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.901243 5072 scope.go:117] "RemoveContainer" containerID="ad378b08318928714643ce4f98d542b634a3800c100debe90b0f99362447a9e2" Nov 24 12:03:29 crc kubenswrapper[5072]: E1124 12:03:29.902395 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad378b08318928714643ce4f98d542b634a3800c100debe90b0f99362447a9e2\": container with ID starting with ad378b08318928714643ce4f98d542b634a3800c100debe90b0f99362447a9e2 not found: ID does not exist" containerID="ad378b08318928714643ce4f98d542b634a3800c100debe90b0f99362447a9e2" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.902459 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad378b08318928714643ce4f98d542b634a3800c100debe90b0f99362447a9e2"} err="failed to get container status \"ad378b08318928714643ce4f98d542b634a3800c100debe90b0f99362447a9e2\": rpc error: code = NotFound desc = could not find container \"ad378b08318928714643ce4f98d542b634a3800c100debe90b0f99362447a9e2\": container with ID starting with ad378b08318928714643ce4f98d542b634a3800c100debe90b0f99362447a9e2 not found: ID does not exist" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.902497 5072 scope.go:117] "RemoveContainer" containerID="844c018bc7163f6c3b82fbd6138216b4b8df38297044fa7f5af0ffadfab336c6" Nov 24 12:03:29 crc kubenswrapper[5072]: E1124 12:03:29.902938 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"844c018bc7163f6c3b82fbd6138216b4b8df38297044fa7f5af0ffadfab336c6\": container with ID starting with 844c018bc7163f6c3b82fbd6138216b4b8df38297044fa7f5af0ffadfab336c6 not found: ID does not exist" containerID="844c018bc7163f6c3b82fbd6138216b4b8df38297044fa7f5af0ffadfab336c6" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.902962 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"844c018bc7163f6c3b82fbd6138216b4b8df38297044fa7f5af0ffadfab336c6"} err="failed to get container status \"844c018bc7163f6c3b82fbd6138216b4b8df38297044fa7f5af0ffadfab336c6\": rpc error: code = NotFound desc = could not find container \"844c018bc7163f6c3b82fbd6138216b4b8df38297044fa7f5af0ffadfab336c6\": container with ID starting with 844c018bc7163f6c3b82fbd6138216b4b8df38297044fa7f5af0ffadfab336c6 not found: ID does not exist" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.902976 5072 scope.go:117] "RemoveContainer" containerID="10240a5c6ef81039a5443102982f014194a12f6c8d92ee15800b712f124ffc91" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.903328 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10240a5c6ef81039a5443102982f014194a12f6c8d92ee15800b712f124ffc91"} err="failed to get container status \"10240a5c6ef81039a5443102982f014194a12f6c8d92ee15800b712f124ffc91\": rpc error: code = NotFound desc = could not find container \"10240a5c6ef81039a5443102982f014194a12f6c8d92ee15800b712f124ffc91\": container with ID starting with 10240a5c6ef81039a5443102982f014194a12f6c8d92ee15800b712f124ffc91 not found: ID does not exist" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.903411 5072 scope.go:117] "RemoveContainer" containerID="d164df8c86a15784373ce94fd976f52735b4763dcd3a0cb65e7cac64ac416673" Nov 24 12:03:29 crc 
kubenswrapper[5072]: I1124 12:03:29.903777 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d164df8c86a15784373ce94fd976f52735b4763dcd3a0cb65e7cac64ac416673"} err="failed to get container status \"d164df8c86a15784373ce94fd976f52735b4763dcd3a0cb65e7cac64ac416673\": rpc error: code = NotFound desc = could not find container \"d164df8c86a15784373ce94fd976f52735b4763dcd3a0cb65e7cac64ac416673\": container with ID starting with d164df8c86a15784373ce94fd976f52735b4763dcd3a0cb65e7cac64ac416673 not found: ID does not exist" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.903808 5072 scope.go:117] "RemoveContainer" containerID="ad378b08318928714643ce4f98d542b634a3800c100debe90b0f99362447a9e2" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.904079 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad378b08318928714643ce4f98d542b634a3800c100debe90b0f99362447a9e2"} err="failed to get container status \"ad378b08318928714643ce4f98d542b634a3800c100debe90b0f99362447a9e2\": rpc error: code = NotFound desc = could not find container \"ad378b08318928714643ce4f98d542b634a3800c100debe90b0f99362447a9e2\": container with ID starting with ad378b08318928714643ce4f98d542b634a3800c100debe90b0f99362447a9e2 not found: ID does not exist" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.904103 5072 scope.go:117] "RemoveContainer" containerID="844c018bc7163f6c3b82fbd6138216b4b8df38297044fa7f5af0ffadfab336c6" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.904331 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"844c018bc7163f6c3b82fbd6138216b4b8df38297044fa7f5af0ffadfab336c6"} err="failed to get container status \"844c018bc7163f6c3b82fbd6138216b4b8df38297044fa7f5af0ffadfab336c6\": rpc error: code = NotFound desc = could not find container \"844c018bc7163f6c3b82fbd6138216b4b8df38297044fa7f5af0ffadfab336c6\": container with ID starting with 844c018bc7163f6c3b82fbd6138216b4b8df38297044fa7f5af0ffadfab336c6 not found: ID does not exist" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.904357 5072 scope.go:117] "RemoveContainer" containerID="10240a5c6ef81039a5443102982f014194a12f6c8d92ee15800b712f124ffc91" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.904625 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10240a5c6ef81039a5443102982f014194a12f6c8d92ee15800b712f124ffc91"} err="failed to get container status \"10240a5c6ef81039a5443102982f014194a12f6c8d92ee15800b712f124ffc91\": rpc error: code = NotFound desc = could not find container \"10240a5c6ef81039a5443102982f014194a12f6c8d92ee15800b712f124ffc91\": container with ID starting with 10240a5c6ef81039a5443102982f014194a12f6c8d92ee15800b712f124ffc91 not found: ID does not exist" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.904657 5072 scope.go:117] "RemoveContainer" containerID="d164df8c86a15784373ce94fd976f52735b4763dcd3a0cb65e7cac64ac416673" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.904895 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d164df8c86a15784373ce94fd976f52735b4763dcd3a0cb65e7cac64ac416673"} err="failed to get container status \"d164df8c86a15784373ce94fd976f52735b4763dcd3a0cb65e7cac64ac416673\": rpc error: code = NotFound desc = could not find container \"d164df8c86a15784373ce94fd976f52735b4763dcd3a0cb65e7cac64ac416673\": container with ID 
starting with d164df8c86a15784373ce94fd976f52735b4763dcd3a0cb65e7cac64ac416673 not found: ID does not exist" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.904931 5072 scope.go:117] "RemoveContainer" containerID="ad378b08318928714643ce4f98d542b634a3800c100debe90b0f99362447a9e2" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.905236 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad378b08318928714643ce4f98d542b634a3800c100debe90b0f99362447a9e2"} err="failed to get container status \"ad378b08318928714643ce4f98d542b634a3800c100debe90b0f99362447a9e2\": rpc error: code = NotFound desc = could not find container \"ad378b08318928714643ce4f98d542b634a3800c100debe90b0f99362447a9e2\": container with ID starting with ad378b08318928714643ce4f98d542b634a3800c100debe90b0f99362447a9e2 not found: ID does not exist" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.905271 5072 scope.go:117] "RemoveContainer" containerID="844c018bc7163f6c3b82fbd6138216b4b8df38297044fa7f5af0ffadfab336c6" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.905683 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"844c018bc7163f6c3b82fbd6138216b4b8df38297044fa7f5af0ffadfab336c6"} err="failed to get container status \"844c018bc7163f6c3b82fbd6138216b4b8df38297044fa7f5af0ffadfab336c6\": rpc error: code = NotFound desc = could not find container \"844c018bc7163f6c3b82fbd6138216b4b8df38297044fa7f5af0ffadfab336c6\": container with ID starting with 844c018bc7163f6c3b82fbd6138216b4b8df38297044fa7f5af0ffadfab336c6 not found: ID does not exist" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.905709 5072 scope.go:117] "RemoveContainer" containerID="10240a5c6ef81039a5443102982f014194a12f6c8d92ee15800b712f124ffc91" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.906096 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10240a5c6ef81039a5443102982f014194a12f6c8d92ee15800b712f124ffc91"} err="failed to get container status \"10240a5c6ef81039a5443102982f014194a12f6c8d92ee15800b712f124ffc91\": rpc error: code = NotFound desc = could not find container \"10240a5c6ef81039a5443102982f014194a12f6c8d92ee15800b712f124ffc91\": container with ID starting with 10240a5c6ef81039a5443102982f014194a12f6c8d92ee15800b712f124ffc91 not found: ID does not exist" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.906119 5072 scope.go:117] "RemoveContainer" containerID="d164df8c86a15784373ce94fd976f52735b4763dcd3a0cb65e7cac64ac416673" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.906509 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d164df8c86a15784373ce94fd976f52735b4763dcd3a0cb65e7cac64ac416673"} err="failed to get container status \"d164df8c86a15784373ce94fd976f52735b4763dcd3a0cb65e7cac64ac416673\": rpc error: code = NotFound desc = could not find container \"d164df8c86a15784373ce94fd976f52735b4763dcd3a0cb65e7cac64ac416673\": container with ID starting with d164df8c86a15784373ce94fd976f52735b4763dcd3a0cb65e7cac64ac416673 not found: ID does not exist" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.906536 5072 scope.go:117] "RemoveContainer" containerID="ad378b08318928714643ce4f98d542b634a3800c100debe90b0f99362447a9e2" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.906895 5072 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ad378b08318928714643ce4f98d542b634a3800c100debe90b0f99362447a9e2"} err="failed to get container status \"ad378b08318928714643ce4f98d542b634a3800c100debe90b0f99362447a9e2\": rpc error: code = NotFound desc = could not find container \"ad378b08318928714643ce4f98d542b634a3800c100debe90b0f99362447a9e2\": container with ID starting with ad378b08318928714643ce4f98d542b634a3800c100debe90b0f99362447a9e2 not found: ID does not exist" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.906918 5072 scope.go:117] "RemoveContainer" containerID="844c018bc7163f6c3b82fbd6138216b4b8df38297044fa7f5af0ffadfab336c6" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.907161 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"844c018bc7163f6c3b82fbd6138216b4b8df38297044fa7f5af0ffadfab336c6"} err="failed to get container status \"844c018bc7163f6c3b82fbd6138216b4b8df38297044fa7f5af0ffadfab336c6\": rpc error: code = NotFound desc = could not find container \"844c018bc7163f6c3b82fbd6138216b4b8df38297044fa7f5af0ffadfab336c6\": container with ID starting with 844c018bc7163f6c3b82fbd6138216b4b8df38297044fa7f5af0ffadfab336c6 not found: ID does not exist" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.922651 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "5778c6d9-fc74-4bc8-b5da-97b24931714a" (UID: "5778c6d9-fc74-4bc8-b5da-97b24931714a"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.955601 5072 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.955637 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.955649 5072 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.955660 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9547l\" (UniqueName: \"kubernetes.io/projected/5778c6d9-fc74-4bc8-b5da-97b24931714a-kube-api-access-9547l\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.955674 5072 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5778c6d9-fc74-4bc8-b5da-97b24931714a-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.971533 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5778c6d9-fc74-4bc8-b5da-97b24931714a" (UID: "5778c6d9-fc74-4bc8-b5da-97b24931714a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:03:29 crc kubenswrapper[5072]: I1124 12:03:29.993560 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-config-data" (OuterVolumeSpecName: "config-data") pod "5778c6d9-fc74-4bc8-b5da-97b24931714a" (UID: "5778c6d9-fc74-4bc8-b5da-97b24931714a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.059342 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.059397 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5778c6d9-fc74-4bc8-b5da-97b24931714a-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.164288 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.183400 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.198503 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:03:30 crc kubenswrapper[5072]: E1124 12:03:30.199139 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5778c6d9-fc74-4bc8-b5da-97b24931714a" containerName="proxy-httpd" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.199208 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="5778c6d9-fc74-4bc8-b5da-97b24931714a" containerName="proxy-httpd" Nov 24 12:03:30 crc kubenswrapper[5072]: E1124 12:03:30.199284 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5621b8b6-4676-4b1c-992c-839a60accf2f" containerName="init" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.199333 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="5621b8b6-4676-4b1c-992c-839a60accf2f" containerName="init" Nov 24 12:03:30 crc kubenswrapper[5072]: E1124 12:03:30.199413 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5778c6d9-fc74-4bc8-b5da-97b24931714a" containerName="ceilometer-central-agent" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.199466 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="5778c6d9-fc74-4bc8-b5da-97b24931714a" containerName="ceilometer-central-agent" Nov 24 12:03:30 crc kubenswrapper[5072]: E1124 12:03:30.199533 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5621b8b6-4676-4b1c-992c-839a60accf2f" containerName="dnsmasq-dns" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.199585 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="5621b8b6-4676-4b1c-992c-839a60accf2f" containerName="dnsmasq-dns" Nov 24 12:03:30 crc kubenswrapper[5072]: E1124 12:03:30.199645 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5778c6d9-fc74-4bc8-b5da-97b24931714a" containerName="sg-core" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.199702 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="5778c6d9-fc74-4bc8-b5da-97b24931714a" containerName="sg-core" Nov 24 12:03:30 crc kubenswrapper[5072]: E1124 12:03:30.199758 5072 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5778c6d9-fc74-4bc8-b5da-97b24931714a" containerName="ceilometer-notification-agent" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.199821 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="5778c6d9-fc74-4bc8-b5da-97b24931714a" containerName="ceilometer-notification-agent" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.200041 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="5778c6d9-fc74-4bc8-b5da-97b24931714a" containerName="sg-core" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.200122 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="5778c6d9-fc74-4bc8-b5da-97b24931714a" containerName="proxy-httpd" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.200189 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="5778c6d9-fc74-4bc8-b5da-97b24931714a" containerName="ceilometer-central-agent" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.200245 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="5621b8b6-4676-4b1c-992c-839a60accf2f" containerName="dnsmasq-dns" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.200293 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="5778c6d9-fc74-4bc8-b5da-97b24931714a" containerName="ceilometer-notification-agent" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.202282 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.204871 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.205029 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.205257 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.209161 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.262756 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21-log-httpd\") pod \"ceilometer-0\" (UID: \"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21\") " pod="openstack/ceilometer-0" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.263032 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21-run-httpd\") pod \"ceilometer-0\" (UID: \"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21\") " pod="openstack/ceilometer-0" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.263092 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21\") " pod="openstack/ceilometer-0" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.263167 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh62m\" (UniqueName: \"kubernetes.io/projected/e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21-kube-api-access-fh62m\") pod 
\"ceilometer-0\" (UID: \"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21\") " pod="openstack/ceilometer-0" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.263476 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21-config-data\") pod \"ceilometer-0\" (UID: \"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21\") " pod="openstack/ceilometer-0" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.263526 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21\") " pod="openstack/ceilometer-0" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.263594 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21-scripts\") pod \"ceilometer-0\" (UID: \"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21\") " pod="openstack/ceilometer-0" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.263616 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21\") " pod="openstack/ceilometer-0" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.366048 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21-run-httpd\") pod \"ceilometer-0\" (UID: \"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21\") " pod="openstack/ceilometer-0" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.366104 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21\") " pod="openstack/ceilometer-0" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.366143 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh62m\" (UniqueName: \"kubernetes.io/projected/e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21-kube-api-access-fh62m\") pod \"ceilometer-0\" (UID: \"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21\") " pod="openstack/ceilometer-0" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.366252 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21-config-data\") pod \"ceilometer-0\" (UID: \"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21\") " pod="openstack/ceilometer-0" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.366270 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21\") " pod="openstack/ceilometer-0" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.366290 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21-scripts\") pod \"ceilometer-0\" (UID: \"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21\") " pod="openstack/ceilometer-0" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.366308 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21\") " pod="openstack/ceilometer-0" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.366356 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21-log-httpd\") pod \"ceilometer-0\" (UID: \"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21\") " pod="openstack/ceilometer-0" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.367345 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21-log-httpd\") pod \"ceilometer-0\" (UID: \"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21\") " pod="openstack/ceilometer-0" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.367938 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21-run-httpd\") pod \"ceilometer-0\" (UID: \"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21\") " pod="openstack/ceilometer-0" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.371345 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21\") " pod="openstack/ceilometer-0" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.373666 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21\") " pod="openstack/ceilometer-0" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.373860 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21-scripts\") pod \"ceilometer-0\" (UID: \"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21\") " pod="openstack/ceilometer-0" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.374390 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21\") " pod="openstack/ceilometer-0" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.374702 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21-config-data\") pod \"ceilometer-0\" (UID: \"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21\") " pod="openstack/ceilometer-0" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.386767 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh62m\" (UniqueName: 
\"kubernetes.io/projected/e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21-kube-api-access-fh62m\") pod \"ceilometer-0\" (UID: \"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21\") " pod="openstack/ceilometer-0" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.590902 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.832850 5072 generic.go:334] "Generic (PLEG): container finished" podID="38124ab6-e614-4256-a175-a4e280a54132" containerID="e627df6144b89804dfbc0d66ecda3fa8690657b376e18ba26a3923141149220f" exitCode=0 Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.833240 5072 generic.go:334] "Generic (PLEG): container finished" podID="38124ab6-e614-4256-a175-a4e280a54132" containerID="14343f9fa448753f261f46b3f99393ff96c5b753a5347ff2622b2c7baba901d2" exitCode=1 Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.832938 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"38124ab6-e614-4256-a175-a4e280a54132","Type":"ContainerDied","Data":"e627df6144b89804dfbc0d66ecda3fa8690657b376e18ba26a3923141149220f"} Nov 24 12:03:30 crc kubenswrapper[5072]: I1124 12:03:30.833287 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"38124ab6-e614-4256-a175-a4e280a54132","Type":"ContainerDied","Data":"14343f9fa448753f261f46b3f99393ff96c5b753a5347ff2622b2c7baba901d2"} Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.045771 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5778c6d9-fc74-4bc8-b5da-97b24931714a" path="/var/lib/kubelet/pods/5778c6d9-fc74-4bc8-b5da-97b24931714a/volumes" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.096250 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.098634 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 24 12:03:31 crc kubenswrapper[5072]: W1124 12:03:31.103338 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6e58a4b_cc8d_45ea_8aad_10f44bcc2c21.slice/crio-8748f19781e7c8d0d6c5563c54e9c14ffefb53130058ba3a05f074c88dcf085b WatchSource:0}: Error finding container 8748f19781e7c8d0d6c5563c54e9c14ffefb53130058ba3a05f074c88dcf085b: Status 404 returned error can't find the container with id 8748f19781e7c8d0d6c5563c54e9c14ffefb53130058ba3a05f074c88dcf085b Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.190600 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/38124ab6-e614-4256-a175-a4e280a54132-ceph\") pod \"38124ab6-e614-4256-a175-a4e280a54132\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.190724 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38124ab6-e614-4256-a175-a4e280a54132-combined-ca-bundle\") pod \"38124ab6-e614-4256-a175-a4e280a54132\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.190757 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38124ab6-e614-4256-a175-a4e280a54132-etc-machine-id\") pod \"38124ab6-e614-4256-a175-a4e280a54132\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.190856 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/38124ab6-e614-4256-a175-a4e280a54132-var-lib-manila\") pod \"38124ab6-e614-4256-a175-a4e280a54132\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.190903 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38124ab6-e614-4256-a175-a4e280a54132-config-data-custom\") pod \"38124ab6-e614-4256-a175-a4e280a54132\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.190938 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf668\" (UniqueName: \"kubernetes.io/projected/38124ab6-e614-4256-a175-a4e280a54132-kube-api-access-pf668\") pod \"38124ab6-e614-4256-a175-a4e280a54132\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.190966 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38124ab6-e614-4256-a175-a4e280a54132-scripts\") pod \"38124ab6-e614-4256-a175-a4e280a54132\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.191024 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38124ab6-e614-4256-a175-a4e280a54132-config-data\") pod \"38124ab6-e614-4256-a175-a4e280a54132\" (UID: \"38124ab6-e614-4256-a175-a4e280a54132\") " Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.191773 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/38124ab6-e614-4256-a175-a4e280a54132-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "38124ab6-e614-4256-a175-a4e280a54132" (UID: "38124ab6-e614-4256-a175-a4e280a54132"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.192504 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38124ab6-e614-4256-a175-a4e280a54132-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "38124ab6-e614-4256-a175-a4e280a54132" (UID: "38124ab6-e614-4256-a175-a4e280a54132"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.196776 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38124ab6-e614-4256-a175-a4e280a54132-kube-api-access-pf668" (OuterVolumeSpecName: "kube-api-access-pf668") pod "38124ab6-e614-4256-a175-a4e280a54132" (UID: "38124ab6-e614-4256-a175-a4e280a54132"). InnerVolumeSpecName "kube-api-access-pf668". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.197510 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38124ab6-e614-4256-a175-a4e280a54132-ceph" (OuterVolumeSpecName: "ceph") pod "38124ab6-e614-4256-a175-a4e280a54132" (UID: "38124ab6-e614-4256-a175-a4e280a54132"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.197616 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38124ab6-e614-4256-a175-a4e280a54132-scripts" (OuterVolumeSpecName: "scripts") pod "38124ab6-e614-4256-a175-a4e280a54132" (UID: "38124ab6-e614-4256-a175-a4e280a54132"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.198498 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38124ab6-e614-4256-a175-a4e280a54132-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "38124ab6-e614-4256-a175-a4e280a54132" (UID: "38124ab6-e614-4256-a175-a4e280a54132"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.252257 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38124ab6-e614-4256-a175-a4e280a54132-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38124ab6-e614-4256-a175-a4e280a54132" (UID: "38124ab6-e614-4256-a175-a4e280a54132"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.293094 5072 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/38124ab6-e614-4256-a175-a4e280a54132-var-lib-manila\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.293132 5072 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38124ab6-e614-4256-a175-a4e280a54132-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.293142 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf668\" (UniqueName: \"kubernetes.io/projected/38124ab6-e614-4256-a175-a4e280a54132-kube-api-access-pf668\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.293152 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38124ab6-e614-4256-a175-a4e280a54132-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.293161 5072 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/38124ab6-e614-4256-a175-a4e280a54132-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.293172 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38124ab6-e614-4256-a175-a4e280a54132-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.293182 5072 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38124ab6-e614-4256-a175-a4e280a54132-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.349527 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38124ab6-e614-4256-a175-a4e280a54132-config-data" (OuterVolumeSpecName: "config-data") pod "38124ab6-e614-4256-a175-a4e280a54132" (UID: "38124ab6-e614-4256-a175-a4e280a54132"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.395006 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38124ab6-e614-4256-a175-a4e280a54132-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.708535 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xtx6h"] Nov 24 12:03:31 crc kubenswrapper[5072]: E1124 12:03:31.709304 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38124ab6-e614-4256-a175-a4e280a54132" containerName="manila-share" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.709330 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="38124ab6-e614-4256-a175-a4e280a54132" containerName="manila-share" Nov 24 12:03:31 crc kubenswrapper[5072]: E1124 12:03:31.710530 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38124ab6-e614-4256-a175-a4e280a54132" containerName="probe" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.710573 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="38124ab6-e614-4256-a175-a4e280a54132" containerName="probe" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.710812 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="38124ab6-e614-4256-a175-a4e280a54132" containerName="probe" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.710829 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="38124ab6-e614-4256-a175-a4e280a54132" containerName="manila-share" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.718227 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xtx6h" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.721966 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xtx6h"] Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.800574 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0a41c35-fdd9-4f33-befd-5b8540cb7c4f-catalog-content\") pod \"certified-operators-xtx6h\" (UID: \"a0a41c35-fdd9-4f33-befd-5b8540cb7c4f\") " pod="openshift-marketplace/certified-operators-xtx6h" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.800628 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clcx2\" (UniqueName: \"kubernetes.io/projected/a0a41c35-fdd9-4f33-befd-5b8540cb7c4f-kube-api-access-clcx2\") pod \"certified-operators-xtx6h\" (UID: \"a0a41c35-fdd9-4f33-befd-5b8540cb7c4f\") " pod="openshift-marketplace/certified-operators-xtx6h" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.800696 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0a41c35-fdd9-4f33-befd-5b8540cb7c4f-utilities\") pod \"certified-operators-xtx6h\" (UID: \"a0a41c35-fdd9-4f33-befd-5b8540cb7c4f\") " pod="openshift-marketplace/certified-operators-xtx6h" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.844001 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21","Type":"ContainerStarted","Data":"8748f19781e7c8d0d6c5563c54e9c14ffefb53130058ba3a05f074c88dcf085b"} Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.846934 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"38124ab6-e614-4256-a175-a4e280a54132","Type":"ContainerDied","Data":"93e3809a816660145811c05bad22e6a6108ec0e70e2f050528c38dfbd628a18e"} Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.846981 5072 scope.go:117] "RemoveContainer" containerID="e627df6144b89804dfbc0d66ecda3fa8690657b376e18ba26a3923141149220f" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.847094 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.872115 5072 scope.go:117] "RemoveContainer" containerID="14343f9fa448753f261f46b3f99393ff96c5b753a5347ff2622b2c7baba901d2" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.901591 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.931848 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0a41c35-fdd9-4f33-befd-5b8540cb7c4f-utilities\") pod \"certified-operators-xtx6h\" (UID: \"a0a41c35-fdd9-4f33-befd-5b8540cb7c4f\") " pod="openshift-marketplace/certified-operators-xtx6h" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.932213 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0a41c35-fdd9-4f33-befd-5b8540cb7c4f-catalog-content\") pod \"certified-operators-xtx6h\" (UID: \"a0a41c35-fdd9-4f33-befd-5b8540cb7c4f\") " pod="openshift-marketplace/certified-operators-xtx6h" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.932257 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clcx2\" (UniqueName: \"kubernetes.io/projected/a0a41c35-fdd9-4f33-befd-5b8540cb7c4f-kube-api-access-clcx2\") pod \"certified-operators-xtx6h\" (UID: \"a0a41c35-fdd9-4f33-befd-5b8540cb7c4f\") " pod="openshift-marketplace/certified-operators-xtx6h" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.933265 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0a41c35-fdd9-4f33-befd-5b8540cb7c4f-utilities\") pod \"certified-operators-xtx6h\" (UID: \"a0a41c35-fdd9-4f33-befd-5b8540cb7c4f\") " pod="openshift-marketplace/certified-operators-xtx6h" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.933289 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0a41c35-fdd9-4f33-befd-5b8540cb7c4f-catalog-content\") pod \"certified-operators-xtx6h\" (UID: \"a0a41c35-fdd9-4f33-befd-5b8540cb7c4f\") " pod="openshift-marketplace/certified-operators-xtx6h" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.934160 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.951171 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clcx2\" (UniqueName: \"kubernetes.io/projected/a0a41c35-fdd9-4f33-befd-5b8540cb7c4f-kube-api-access-clcx2\") pod 
\"certified-operators-xtx6h\" (UID: \"a0a41c35-fdd9-4f33-befd-5b8540cb7c4f\") " pod="openshift-marketplace/certified-operators-xtx6h" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.955515 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.965922 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.966020 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Nov 24 12:03:31 crc kubenswrapper[5072]: I1124 12:03:31.968366 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.038574 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/aee02894-118d-46a9-88b6-4e2099bdf16f-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"aee02894-118d-46a9-88b6-4e2099bdf16f\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.038616 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aee02894-118d-46a9-88b6-4e2099bdf16f-config-data\") pod \"manila-share-share1-0\" (UID: \"aee02894-118d-46a9-88b6-4e2099bdf16f\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.038655 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee02894-118d-46a9-88b6-4e2099bdf16f-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"aee02894-118d-46a9-88b6-4e2099bdf16f\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.038690 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgxc9\" (UniqueName: \"kubernetes.io/projected/aee02894-118d-46a9-88b6-4e2099bdf16f-kube-api-access-qgxc9\") pod \"manila-share-share1-0\" (UID: \"aee02894-118d-46a9-88b6-4e2099bdf16f\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.038708 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aee02894-118d-46a9-88b6-4e2099bdf16f-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"aee02894-118d-46a9-88b6-4e2099bdf16f\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.038734 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aee02894-118d-46a9-88b6-4e2099bdf16f-scripts\") pod \"manila-share-share1-0\" (UID: \"aee02894-118d-46a9-88b6-4e2099bdf16f\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.038765 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/aee02894-118d-46a9-88b6-4e2099bdf16f-ceph\") pod \"manila-share-share1-0\" (UID: \"aee02894-118d-46a9-88b6-4e2099bdf16f\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:32 crc 
kubenswrapper[5072]: I1124 12:03:32.038832 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aee02894-118d-46a9-88b6-4e2099bdf16f-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"aee02894-118d-46a9-88b6-4e2099bdf16f\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.039127 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xtx6h" Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.140727 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aee02894-118d-46a9-88b6-4e2099bdf16f-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"aee02894-118d-46a9-88b6-4e2099bdf16f\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.140842 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/aee02894-118d-46a9-88b6-4e2099bdf16f-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"aee02894-118d-46a9-88b6-4e2099bdf16f\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.140883 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aee02894-118d-46a9-88b6-4e2099bdf16f-config-data\") pod \"manila-share-share1-0\" (UID: \"aee02894-118d-46a9-88b6-4e2099bdf16f\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.140899 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aee02894-118d-46a9-88b6-4e2099bdf16f-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"aee02894-118d-46a9-88b6-4e2099bdf16f\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.140934 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee02894-118d-46a9-88b6-4e2099bdf16f-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"aee02894-118d-46a9-88b6-4e2099bdf16f\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.140988 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/aee02894-118d-46a9-88b6-4e2099bdf16f-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"aee02894-118d-46a9-88b6-4e2099bdf16f\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.140994 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgxc9\" (UniqueName: \"kubernetes.io/projected/aee02894-118d-46a9-88b6-4e2099bdf16f-kube-api-access-qgxc9\") pod \"manila-share-share1-0\" (UID: \"aee02894-118d-46a9-88b6-4e2099bdf16f\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.141072 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aee02894-118d-46a9-88b6-4e2099bdf16f-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"aee02894-118d-46a9-88b6-4e2099bdf16f\") " 
pod="openstack/manila-share-share1-0" Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.141150 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aee02894-118d-46a9-88b6-4e2099bdf16f-scripts\") pod \"manila-share-share1-0\" (UID: \"aee02894-118d-46a9-88b6-4e2099bdf16f\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.141258 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/aee02894-118d-46a9-88b6-4e2099bdf16f-ceph\") pod \"manila-share-share1-0\" (UID: \"aee02894-118d-46a9-88b6-4e2099bdf16f\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.147019 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aee02894-118d-46a9-88b6-4e2099bdf16f-scripts\") pod \"manila-share-share1-0\" (UID: \"aee02894-118d-46a9-88b6-4e2099bdf16f\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.154059 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee02894-118d-46a9-88b6-4e2099bdf16f-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"aee02894-118d-46a9-88b6-4e2099bdf16f\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.155143 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aee02894-118d-46a9-88b6-4e2099bdf16f-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"aee02894-118d-46a9-88b6-4e2099bdf16f\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.159088 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aee02894-118d-46a9-88b6-4e2099bdf16f-config-data\") pod \"manila-share-share1-0\" (UID: \"aee02894-118d-46a9-88b6-4e2099bdf16f\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.177224 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgxc9\" (UniqueName: \"kubernetes.io/projected/aee02894-118d-46a9-88b6-4e2099bdf16f-kube-api-access-qgxc9\") pod \"manila-share-share1-0\" (UID: \"aee02894-118d-46a9-88b6-4e2099bdf16f\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.180986 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/aee02894-118d-46a9-88b6-4e2099bdf16f-ceph\") pod \"manila-share-share1-0\" (UID: \"aee02894-118d-46a9-88b6-4e2099bdf16f\") " pod="openstack/manila-share-share1-0" Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.293198 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.573790 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xtx6h"] Nov 24 12:03:32 crc kubenswrapper[5072]: W1124 12:03:32.574890 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0a41c35_fdd9_4f33_befd_5b8540cb7c4f.slice/crio-d260145393d6af63c7339f200dde5132031b2a09d3432dffd5f54dd6e71a5f96 WatchSource:0}: Error finding container d260145393d6af63c7339f200dde5132031b2a09d3432dffd5f54dd6e71a5f96: Status 404 returned error can't find the container with id d260145393d6af63c7339f200dde5132031b2a09d3432dffd5f54dd6e71a5f96 Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.778520 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.857399 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtx6h" event={"ID":"a0a41c35-fdd9-4f33-befd-5b8540cb7c4f","Type":"ContainerStarted","Data":"d260145393d6af63c7339f200dde5132031b2a09d3432dffd5f54dd6e71a5f96"} Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.860830 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21","Type":"ContainerStarted","Data":"88156b173fb090f850ec70eeb722c20b3358da0296addff89cb7bcbcf4b766df"} Nov 24 12:03:32 crc kubenswrapper[5072]: I1124 12:03:32.862627 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"aee02894-118d-46a9-88b6-4e2099bdf16f","Type":"ContainerStarted","Data":"e860414770ec89dad1042e0c677959bd0701c33efa8cc712260a5de98af5166d"} Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.034873 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38124ab6-e614-4256-a175-a4e280a54132" path="/var/lib/kubelet/pods/38124ab6-e614-4256-a175-a4e280a54132/volumes" Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.689011 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.783505 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18acd9e4-2e54-44ce-a600-f9ba836a6994-config-data\") pod \"18acd9e4-2e54-44ce-a600-f9ba836a6994\" (UID: \"18acd9e4-2e54-44ce-a600-f9ba836a6994\") " Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.783569 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glmrt\" (UniqueName: \"kubernetes.io/projected/18acd9e4-2e54-44ce-a600-f9ba836a6994-kube-api-access-glmrt\") pod \"18acd9e4-2e54-44ce-a600-f9ba836a6994\" (UID: \"18acd9e4-2e54-44ce-a600-f9ba836a6994\") " Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.783600 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18acd9e4-2e54-44ce-a600-f9ba836a6994-scripts\") pod \"18acd9e4-2e54-44ce-a600-f9ba836a6994\" (UID: \"18acd9e4-2e54-44ce-a600-f9ba836a6994\") " Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.783659 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18acd9e4-2e54-44ce-a600-f9ba836a6994-combined-ca-bundle\") pod \"18acd9e4-2e54-44ce-a600-f9ba836a6994\" (UID: \"18acd9e4-2e54-44ce-a600-f9ba836a6994\") " Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.783723 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18acd9e4-2e54-44ce-a600-f9ba836a6994-config-data-custom\") pod \"18acd9e4-2e54-44ce-a600-f9ba836a6994\" (UID: \"18acd9e4-2e54-44ce-a600-f9ba836a6994\") " Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.783827 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/18acd9e4-2e54-44ce-a600-f9ba836a6994-etc-machine-id\") pod \"18acd9e4-2e54-44ce-a600-f9ba836a6994\" (UID: \"18acd9e4-2e54-44ce-a600-f9ba836a6994\") " Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.784253 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18acd9e4-2e54-44ce-a600-f9ba836a6994-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "18acd9e4-2e54-44ce-a600-f9ba836a6994" (UID: "18acd9e4-2e54-44ce-a600-f9ba836a6994"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.789466 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18acd9e4-2e54-44ce-a600-f9ba836a6994-scripts" (OuterVolumeSpecName: "scripts") pod "18acd9e4-2e54-44ce-a600-f9ba836a6994" (UID: "18acd9e4-2e54-44ce-a600-f9ba836a6994"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.789609 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18acd9e4-2e54-44ce-a600-f9ba836a6994-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "18acd9e4-2e54-44ce-a600-f9ba836a6994" (UID: "18acd9e4-2e54-44ce-a600-f9ba836a6994"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.802543 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18acd9e4-2e54-44ce-a600-f9ba836a6994-kube-api-access-glmrt" (OuterVolumeSpecName: "kube-api-access-glmrt") pod "18acd9e4-2e54-44ce-a600-f9ba836a6994" (UID: "18acd9e4-2e54-44ce-a600-f9ba836a6994"). InnerVolumeSpecName "kube-api-access-glmrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.835606 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18acd9e4-2e54-44ce-a600-f9ba836a6994-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18acd9e4-2e54-44ce-a600-f9ba836a6994" (UID: "18acd9e4-2e54-44ce-a600-f9ba836a6994"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.877463 5072 generic.go:334] "Generic (PLEG): container finished" podID="a0a41c35-fdd9-4f33-befd-5b8540cb7c4f" containerID="e163da41598549385309bf7380923a453ff54e5388f47eb7c6a1fef28fceab6b" exitCode=0 Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.877560 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtx6h" event={"ID":"a0a41c35-fdd9-4f33-befd-5b8540cb7c4f","Type":"ContainerDied","Data":"e163da41598549385309bf7380923a453ff54e5388f47eb7c6a1fef28fceab6b"} Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.882395 5072 generic.go:334] "Generic (PLEG): container finished" podID="18acd9e4-2e54-44ce-a600-f9ba836a6994" containerID="6412cbef088f8c03dea954f725ece5a4db13481e834b66f053b787dc95377cdc" exitCode=0 Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.882465 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"18acd9e4-2e54-44ce-a600-f9ba836a6994","Type":"ContainerDied","Data":"6412cbef088f8c03dea954f725ece5a4db13481e834b66f053b787dc95377cdc"} Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.882496 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"18acd9e4-2e54-44ce-a600-f9ba836a6994","Type":"ContainerDied","Data":"27e2fe43a76e9ebc770fce644a38f18b12f897d87a8fdb8d94b6c6eed8ad56ae"} Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.882515 5072 scope.go:117] "RemoveContainer" containerID="51ddc6d164425f4c95638d0a73d5148ba775e3007a5e1e51ff42491dd048fc2a" Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.882603 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.885540 5072 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18acd9e4-2e54-44ce-a600-f9ba836a6994-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.885557 5072 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/18acd9e4-2e54-44ce-a600-f9ba836a6994-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.885573 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glmrt\" (UniqueName: \"kubernetes.io/projected/18acd9e4-2e54-44ce-a600-f9ba836a6994-kube-api-access-glmrt\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.885583 5072 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18acd9e4-2e54-44ce-a600-f9ba836a6994-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.885591 5072 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18acd9e4-2e54-44ce-a600-f9ba836a6994-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.893264 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21","Type":"ContainerStarted","Data":"c44e2bccc4b38a5e51d92ac2df2df579e4b0fab190114be5259bfcd646f4bfde"} Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.895173 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"aee02894-118d-46a9-88b6-4e2099bdf16f","Type":"ContainerStarted","Data":"9711665e52e37f3d45f2b5ae6024c935707c6be4f1c5f80cc2e2c0c8385cf3f2"} Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.919782 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18acd9e4-2e54-44ce-a600-f9ba836a6994-config-data" (OuterVolumeSpecName: "config-data") pod "18acd9e4-2e54-44ce-a600-f9ba836a6994" (UID: "18acd9e4-2e54-44ce-a600-f9ba836a6994"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.922304 5072 scope.go:117] "RemoveContainer" containerID="6412cbef088f8c03dea954f725ece5a4db13481e834b66f053b787dc95377cdc" Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.955196 5072 scope.go:117] "RemoveContainer" containerID="51ddc6d164425f4c95638d0a73d5148ba775e3007a5e1e51ff42491dd048fc2a" Nov 24 12:03:33 crc kubenswrapper[5072]: E1124 12:03:33.955712 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51ddc6d164425f4c95638d0a73d5148ba775e3007a5e1e51ff42491dd048fc2a\": container with ID starting with 51ddc6d164425f4c95638d0a73d5148ba775e3007a5e1e51ff42491dd048fc2a not found: ID does not exist" containerID="51ddc6d164425f4c95638d0a73d5148ba775e3007a5e1e51ff42491dd048fc2a" Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.955742 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51ddc6d164425f4c95638d0a73d5148ba775e3007a5e1e51ff42491dd048fc2a"} err="failed to get container status \"51ddc6d164425f4c95638d0a73d5148ba775e3007a5e1e51ff42491dd048fc2a\": rpc error: code = NotFound desc = could not find container \"51ddc6d164425f4c95638d0a73d5148ba775e3007a5e1e51ff42491dd048fc2a\": container with ID starting with 51ddc6d164425f4c95638d0a73d5148ba775e3007a5e1e51ff42491dd048fc2a not found: ID does not exist" Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.955771 5072 scope.go:117] "RemoveContainer" containerID="6412cbef088f8c03dea954f725ece5a4db13481e834b66f053b787dc95377cdc" Nov 24 12:03:33 crc kubenswrapper[5072]: E1124 12:03:33.956188 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6412cbef088f8c03dea954f725ece5a4db13481e834b66f053b787dc95377cdc\": container with ID starting with 6412cbef088f8c03dea954f725ece5a4db13481e834b66f053b787dc95377cdc not found: ID does not exist" containerID="6412cbef088f8c03dea954f725ece5a4db13481e834b66f053b787dc95377cdc" Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.956212 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6412cbef088f8c03dea954f725ece5a4db13481e834b66f053b787dc95377cdc"} err="failed to get container status \"6412cbef088f8c03dea954f725ece5a4db13481e834b66f053b787dc95377cdc\": rpc error: code = NotFound desc = could not find container \"6412cbef088f8c03dea954f725ece5a4db13481e834b66f053b787dc95377cdc\": container with ID starting with 6412cbef088f8c03dea954f725ece5a4db13481e834b66f053b787dc95377cdc not found: ID does not exist" Nov 24 12:03:33 crc kubenswrapper[5072]: I1124 12:03:33.987648 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18acd9e4-2e54-44ce-a600-f9ba836a6994-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.104276 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j5htq"] Nov 24 12:03:34 crc kubenswrapper[5072]: E1124 12:03:34.108750 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18acd9e4-2e54-44ce-a600-f9ba836a6994" containerName="probe" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.108780 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="18acd9e4-2e54-44ce-a600-f9ba836a6994" containerName="probe" Nov 24 12:03:34 crc kubenswrapper[5072]: 
E1124 12:03:34.108808 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18acd9e4-2e54-44ce-a600-f9ba836a6994" containerName="manila-scheduler" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.108815 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="18acd9e4-2e54-44ce-a600-f9ba836a6994" containerName="manila-scheduler" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.108993 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="18acd9e4-2e54-44ce-a600-f9ba836a6994" containerName="probe" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.109011 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="18acd9e4-2e54-44ce-a600-f9ba836a6994" containerName="manila-scheduler" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.110286 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j5htq" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.128711 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j5htq"] Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.195440 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b8c141a-32f9-41ba-95af-8448cf8cd002-catalog-content\") pod \"redhat-operators-j5htq\" (UID: \"8b8c141a-32f9-41ba-95af-8448cf8cd002\") " pod="openshift-marketplace/redhat-operators-j5htq" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.195509 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qggnr\" (UniqueName: \"kubernetes.io/projected/8b8c141a-32f9-41ba-95af-8448cf8cd002-kube-api-access-qggnr\") pod \"redhat-operators-j5htq\" (UID: \"8b8c141a-32f9-41ba-95af-8448cf8cd002\") " pod="openshift-marketplace/redhat-operators-j5htq" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.195622 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b8c141a-32f9-41ba-95af-8448cf8cd002-utilities\") pod \"redhat-operators-j5htq\" (UID: \"8b8c141a-32f9-41ba-95af-8448cf8cd002\") " pod="openshift-marketplace/redhat-operators-j5htq" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.217013 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.228776 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.258130 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.260154 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.262733 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.278561 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.297088 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qggnr\" (UniqueName: \"kubernetes.io/projected/8b8c141a-32f9-41ba-95af-8448cf8cd002-kube-api-access-qggnr\") pod \"redhat-operators-j5htq\" (UID: \"8b8c141a-32f9-41ba-95af-8448cf8cd002\") " pod="openshift-marketplace/redhat-operators-j5htq" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.297204 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c1f9647-62ad-452d-84ae-81211ebc18b5-config-data\") pod \"manila-scheduler-0\" (UID: \"7c1f9647-62ad-452d-84ae-81211ebc18b5\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.297236 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c1f9647-62ad-452d-84ae-81211ebc18b5-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"7c1f9647-62ad-452d-84ae-81211ebc18b5\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.297283 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c1f9647-62ad-452d-84ae-81211ebc18b5-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"7c1f9647-62ad-452d-84ae-81211ebc18b5\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.297306 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b8c141a-32f9-41ba-95af-8448cf8cd002-utilities\") pod \"redhat-operators-j5htq\" (UID: \"8b8c141a-32f9-41ba-95af-8448cf8cd002\") " pod="openshift-marketplace/redhat-operators-j5htq" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.297323 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vd4t\" (UniqueName: \"kubernetes.io/projected/7c1f9647-62ad-452d-84ae-81211ebc18b5-kube-api-access-7vd4t\") pod \"manila-scheduler-0\" (UID: \"7c1f9647-62ad-452d-84ae-81211ebc18b5\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.297351 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c1f9647-62ad-452d-84ae-81211ebc18b5-scripts\") pod \"manila-scheduler-0\" (UID: \"7c1f9647-62ad-452d-84ae-81211ebc18b5\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.297389 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c1f9647-62ad-452d-84ae-81211ebc18b5-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"7c1f9647-62ad-452d-84ae-81211ebc18b5\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 
12:03:34.297414 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b8c141a-32f9-41ba-95af-8448cf8cd002-catalog-content\") pod \"redhat-operators-j5htq\" (UID: \"8b8c141a-32f9-41ba-95af-8448cf8cd002\") " pod="openshift-marketplace/redhat-operators-j5htq" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.299779 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b8c141a-32f9-41ba-95af-8448cf8cd002-utilities\") pod \"redhat-operators-j5htq\" (UID: \"8b8c141a-32f9-41ba-95af-8448cf8cd002\") " pod="openshift-marketplace/redhat-operators-j5htq" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.301319 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b8c141a-32f9-41ba-95af-8448cf8cd002-catalog-content\") pod \"redhat-operators-j5htq\" (UID: \"8b8c141a-32f9-41ba-95af-8448cf8cd002\") " pod="openshift-marketplace/redhat-operators-j5htq" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.322551 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qggnr\" (UniqueName: \"kubernetes.io/projected/8b8c141a-32f9-41ba-95af-8448cf8cd002-kube-api-access-qggnr\") pod \"redhat-operators-j5htq\" (UID: \"8b8c141a-32f9-41ba-95af-8448cf8cd002\") " pod="openshift-marketplace/redhat-operators-j5htq" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.399244 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c1f9647-62ad-452d-84ae-81211ebc18b5-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"7c1f9647-62ad-452d-84ae-81211ebc18b5\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.399300 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vd4t\" (UniqueName: \"kubernetes.io/projected/7c1f9647-62ad-452d-84ae-81211ebc18b5-kube-api-access-7vd4t\") pod \"manila-scheduler-0\" (UID: \"7c1f9647-62ad-452d-84ae-81211ebc18b5\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.399330 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c1f9647-62ad-452d-84ae-81211ebc18b5-scripts\") pod \"manila-scheduler-0\" (UID: \"7c1f9647-62ad-452d-84ae-81211ebc18b5\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.399366 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c1f9647-62ad-452d-84ae-81211ebc18b5-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"7c1f9647-62ad-452d-84ae-81211ebc18b5\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.399395 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c1f9647-62ad-452d-84ae-81211ebc18b5-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"7c1f9647-62ad-452d-84ae-81211ebc18b5\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.399651 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7c1f9647-62ad-452d-84ae-81211ebc18b5-config-data\") pod \"manila-scheduler-0\" (UID: \"7c1f9647-62ad-452d-84ae-81211ebc18b5\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.399701 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c1f9647-62ad-452d-84ae-81211ebc18b5-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"7c1f9647-62ad-452d-84ae-81211ebc18b5\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.402929 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c1f9647-62ad-452d-84ae-81211ebc18b5-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"7c1f9647-62ad-452d-84ae-81211ebc18b5\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.403900 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c1f9647-62ad-452d-84ae-81211ebc18b5-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"7c1f9647-62ad-452d-84ae-81211ebc18b5\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.405574 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c1f9647-62ad-452d-84ae-81211ebc18b5-config-data\") pod \"manila-scheduler-0\" (UID: \"7c1f9647-62ad-452d-84ae-81211ebc18b5\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.414966 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c1f9647-62ad-452d-84ae-81211ebc18b5-scripts\") pod \"manila-scheduler-0\" (UID: \"7c1f9647-62ad-452d-84ae-81211ebc18b5\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.427219 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vd4t\" (UniqueName: \"kubernetes.io/projected/7c1f9647-62ad-452d-84ae-81211ebc18b5-kube-api-access-7vd4t\") pod \"manila-scheduler-0\" (UID: \"7c1f9647-62ad-452d-84ae-81211ebc18b5\") " pod="openstack/manila-scheduler-0" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.482310 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j5htq" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.589463 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 24 12:03:34 crc kubenswrapper[5072]: I1124 12:03:34.906283 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"aee02894-118d-46a9-88b6-4e2099bdf16f","Type":"ContainerStarted","Data":"b48b9970d3c15e60cbc0b1fffb95c73908c1f967ce9c5cd67766285d5290295b"} Nov 24 12:03:35 crc kubenswrapper[5072]: I1124 12:03:35.051485 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18acd9e4-2e54-44ce-a600-f9ba836a6994" path="/var/lib/kubelet/pods/18acd9e4-2e54-44ce-a600-f9ba836a6994/volumes" Nov 24 12:03:35 crc kubenswrapper[5072]: I1124 12:03:35.055328 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j5htq"] Nov 24 12:03:35 crc kubenswrapper[5072]: I1124 12:03:35.175679 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 24 12:03:35 crc kubenswrapper[5072]: I1124 12:03:35.914642 5072 generic.go:334] "Generic (PLEG): container finished" podID="8b8c141a-32f9-41ba-95af-8448cf8cd002" containerID="5fc5bce9f573060501f9def0675b172cecda9ef11664c6ed740cb78f6b1651f4" exitCode=0 Nov 24 12:03:35 crc kubenswrapper[5072]: I1124 12:03:35.914696 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j5htq" event={"ID":"8b8c141a-32f9-41ba-95af-8448cf8cd002","Type":"ContainerDied","Data":"5fc5bce9f573060501f9def0675b172cecda9ef11664c6ed740cb78f6b1651f4"} Nov 24 12:03:35 crc kubenswrapper[5072]: I1124 12:03:35.915257 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j5htq" event={"ID":"8b8c141a-32f9-41ba-95af-8448cf8cd002","Type":"ContainerStarted","Data":"e8008c4a5fb6095d4df98193ef6f153412111bcda6069b20b97f1e4366c9932e"} Nov 24 12:03:35 crc kubenswrapper[5072]: I1124 12:03:35.917028 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"7c1f9647-62ad-452d-84ae-81211ebc18b5","Type":"ContainerStarted","Data":"f59d7485ef5653f88c2e07db8a78414ffe6ae1fa2b6d5f3e2824272104cecb35"} Nov 24 12:03:35 crc kubenswrapper[5072]: I1124 12:03:35.962332 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=4.962313616 podStartE2EDuration="4.962313616s" podCreationTimestamp="2025-11-24 12:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:03:35.953512276 +0000 UTC m=+3267.665036762" watchObservedRunningTime="2025-11-24 12:03:35.962313616 +0000 UTC m=+3267.673838092" Nov 24 12:03:36 crc kubenswrapper[5072]: I1124 12:03:36.927884 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"7c1f9647-62ad-452d-84ae-81211ebc18b5","Type":"ContainerStarted","Data":"320e78687c2177ff086438851d56bec0b05d3858b51ff6d5c7990a005192fb84"} Nov 24 12:03:37 crc kubenswrapper[5072]: I1124 12:03:37.940110 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21","Type":"ContainerStarted","Data":"ebb2c107341c9bcdfe43549f364f5a0291ab6bc41d2cba44f477452f59bbefb2"} Nov 24 12:03:37 crc kubenswrapper[5072]: I1124 12:03:37.945947 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" 
event={"ID":"7c1f9647-62ad-452d-84ae-81211ebc18b5","Type":"ContainerStarted","Data":"b52ed4343da1d15f4249bb7015ea6568223931a1df6f81c0da7301ee0d917e89"} Nov 24 12:03:37 crc kubenswrapper[5072]: I1124 12:03:37.974611 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.97458641 podStartE2EDuration="3.97458641s" podCreationTimestamp="2025-11-24 12:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:03:37.972921289 +0000 UTC m=+3269.684445785" watchObservedRunningTime="2025-11-24 12:03:37.97458641 +0000 UTC m=+3269.686110906" Nov 24 12:03:38 crc kubenswrapper[5072]: I1124 12:03:38.571188 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Nov 24 12:03:38 crc kubenswrapper[5072]: I1124 12:03:38.957829 5072 generic.go:334] "Generic (PLEG): container finished" podID="a0a41c35-fdd9-4f33-befd-5b8540cb7c4f" containerID="2ea592557a38bdb3bd95e00a85bfcf10e64dcfcc08dfd72f2e5adc3ae673b044" exitCode=0 Nov 24 12:03:38 crc kubenswrapper[5072]: I1124 12:03:38.957962 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtx6h" event={"ID":"a0a41c35-fdd9-4f33-befd-5b8540cb7c4f","Type":"ContainerDied","Data":"2ea592557a38bdb3bd95e00a85bfcf10e64dcfcc08dfd72f2e5adc3ae673b044"} Nov 24 12:03:42 crc kubenswrapper[5072]: I1124 12:03:42.294958 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 24 12:03:43 crc kubenswrapper[5072]: I1124 12:03:43.021186 5072 scope.go:117] "RemoveContainer" containerID="4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f" Nov 24 12:03:43 crc kubenswrapper[5072]: E1124 12:03:43.022781 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:03:44 crc kubenswrapper[5072]: I1124 12:03:44.590559 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Nov 24 12:03:52 crc kubenswrapper[5072]: I1124 12:03:52.099850 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21","Type":"ContainerStarted","Data":"c7bdfb7cdbcb7d34f9f68b42e01f31ec90aaad0dd90a863fb90915f10d3387e2"} Nov 24 12:03:52 crc kubenswrapper[5072]: I1124 12:03:52.100462 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 12:03:52 crc kubenswrapper[5072]: I1124 12:03:52.101961 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j5htq" event={"ID":"8b8c141a-32f9-41ba-95af-8448cf8cd002","Type":"ContainerStarted","Data":"809feefb85c772a88c0070cb0c565c74b876cdab4628386736043ea621f16fef"} Nov 24 12:03:52 crc kubenswrapper[5072]: I1124 12:03:52.105068 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtx6h" 
event={"ID":"a0a41c35-fdd9-4f33-befd-5b8540cb7c4f","Type":"ContainerStarted","Data":"a455c9e2b79808fbb6b39bd835b5581b6abfc572f001f558eede6b05f898c165"} Nov 24 12:03:52 crc kubenswrapper[5072]: I1124 12:03:52.135493 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.465613109 podStartE2EDuration="22.135463572s" podCreationTimestamp="2025-11-24 12:03:30 +0000 UTC" firstStartedPulling="2025-11-24 12:03:31.105876924 +0000 UTC m=+3262.817401400" lastFinishedPulling="2025-11-24 12:03:50.775727387 +0000 UTC m=+3282.487251863" observedRunningTime="2025-11-24 12:03:52.124869817 +0000 UTC m=+3283.836394293" watchObservedRunningTime="2025-11-24 12:03:52.135463572 +0000 UTC m=+3283.846988068" Nov 24 12:03:52 crc kubenswrapper[5072]: I1124 12:03:52.151015 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xtx6h" podStartSLOduration=4.104765921 podStartE2EDuration="21.150994271s" podCreationTimestamp="2025-11-24 12:03:31 +0000 UTC" firstStartedPulling="2025-11-24 12:03:33.879940449 +0000 UTC m=+3265.591464925" lastFinishedPulling="2025-11-24 12:03:50.926168799 +0000 UTC m=+3282.637693275" observedRunningTime="2025-11-24 12:03:52.150746204 +0000 UTC m=+3283.862270680" watchObservedRunningTime="2025-11-24 12:03:52.150994271 +0000 UTC m=+3283.862518747" Nov 24 12:03:53 crc kubenswrapper[5072]: I1124 12:03:53.115994 5072 generic.go:334] "Generic (PLEG): container finished" podID="8b8c141a-32f9-41ba-95af-8448cf8cd002" containerID="809feefb85c772a88c0070cb0c565c74b876cdab4628386736043ea621f16fef" exitCode=0 Nov 24 12:03:53 crc kubenswrapper[5072]: I1124 12:03:53.116073 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j5htq" event={"ID":"8b8c141a-32f9-41ba-95af-8448cf8cd002","Type":"ContainerDied","Data":"809feefb85c772a88c0070cb0c565c74b876cdab4628386736043ea621f16fef"} Nov 24 12:03:53 crc kubenswrapper[5072]: I1124 12:03:53.896794 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Nov 24 12:03:54 crc kubenswrapper[5072]: I1124 12:03:54.082398 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g8j49"] Nov 24 12:03:54 crc kubenswrapper[5072]: I1124 12:03:54.084513 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g8j49" Nov 24 12:03:54 crc kubenswrapper[5072]: I1124 12:03:54.099175 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8j49"] Nov 24 12:03:54 crc kubenswrapper[5072]: I1124 12:03:54.134850 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/900fecab-4458-4ac8-8bb7-e5068e9c74d1-utilities\") pod \"redhat-marketplace-g8j49\" (UID: \"900fecab-4458-4ac8-8bb7-e5068e9c74d1\") " pod="openshift-marketplace/redhat-marketplace-g8j49" Nov 24 12:03:54 crc kubenswrapper[5072]: I1124 12:03:54.135785 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45gbk\" (UniqueName: \"kubernetes.io/projected/900fecab-4458-4ac8-8bb7-e5068e9c74d1-kube-api-access-45gbk\") pod \"redhat-marketplace-g8j49\" (UID: \"900fecab-4458-4ac8-8bb7-e5068e9c74d1\") " pod="openshift-marketplace/redhat-marketplace-g8j49" Nov 24 12:03:54 crc kubenswrapper[5072]: I1124 12:03:54.135970 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/900fecab-4458-4ac8-8bb7-e5068e9c74d1-catalog-content\") pod \"redhat-marketplace-g8j49\" (UID: \"900fecab-4458-4ac8-8bb7-e5068e9c74d1\") " pod="openshift-marketplace/redhat-marketplace-g8j49" Nov 24 12:03:54 crc kubenswrapper[5072]: I1124 12:03:54.237516 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/900fecab-4458-4ac8-8bb7-e5068e9c74d1-catalog-content\") pod \"redhat-marketplace-g8j49\" (UID: \"900fecab-4458-4ac8-8bb7-e5068e9c74d1\") " pod="openshift-marketplace/redhat-marketplace-g8j49" Nov 24 12:03:54 crc kubenswrapper[5072]: I1124 12:03:54.237642 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/900fecab-4458-4ac8-8bb7-e5068e9c74d1-utilities\") pod \"redhat-marketplace-g8j49\" (UID: \"900fecab-4458-4ac8-8bb7-e5068e9c74d1\") " pod="openshift-marketplace/redhat-marketplace-g8j49" Nov 24 12:03:54 crc kubenswrapper[5072]: I1124 12:03:54.237735 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45gbk\" (UniqueName: \"kubernetes.io/projected/900fecab-4458-4ac8-8bb7-e5068e9c74d1-kube-api-access-45gbk\") pod \"redhat-marketplace-g8j49\" (UID: \"900fecab-4458-4ac8-8bb7-e5068e9c74d1\") " pod="openshift-marketplace/redhat-marketplace-g8j49" Nov 24 12:03:54 crc kubenswrapper[5072]: I1124 12:03:54.238262 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/900fecab-4458-4ac8-8bb7-e5068e9c74d1-catalog-content\") pod \"redhat-marketplace-g8j49\" (UID: \"900fecab-4458-4ac8-8bb7-e5068e9c74d1\") " pod="openshift-marketplace/redhat-marketplace-g8j49" Nov 24 12:03:54 crc kubenswrapper[5072]: I1124 12:03:54.238336 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/900fecab-4458-4ac8-8bb7-e5068e9c74d1-utilities\") pod \"redhat-marketplace-g8j49\" (UID: \"900fecab-4458-4ac8-8bb7-e5068e9c74d1\") " pod="openshift-marketplace/redhat-marketplace-g8j49" Nov 24 12:03:54 crc kubenswrapper[5072]: I1124 12:03:54.256822 5072 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-45gbk\" (UniqueName: \"kubernetes.io/projected/900fecab-4458-4ac8-8bb7-e5068e9c74d1-kube-api-access-45gbk\") pod \"redhat-marketplace-g8j49\" (UID: \"900fecab-4458-4ac8-8bb7-e5068e9c74d1\") " pod="openshift-marketplace/redhat-marketplace-g8j49" Nov 24 12:03:54 crc kubenswrapper[5072]: I1124 12:03:54.404922 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g8j49" Nov 24 12:03:55 crc kubenswrapper[5072]: I1124 12:03:55.033227 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8j49"] Nov 24 12:03:55 crc kubenswrapper[5072]: W1124 12:03:55.036779 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod900fecab_4458_4ac8_8bb7_e5068e9c74d1.slice/crio-f7212b9524d7dec6d41efdf338b323cf42c3ea0af4601b22894628132fd374e3 WatchSource:0}: Error finding container f7212b9524d7dec6d41efdf338b323cf42c3ea0af4601b22894628132fd374e3: Status 404 returned error can't find the container with id f7212b9524d7dec6d41efdf338b323cf42c3ea0af4601b22894628132fd374e3 Nov 24 12:03:55 crc kubenswrapper[5072]: I1124 12:03:55.135088 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8j49" event={"ID":"900fecab-4458-4ac8-8bb7-e5068e9c74d1","Type":"ContainerStarted","Data":"f7212b9524d7dec6d41efdf338b323cf42c3ea0af4601b22894628132fd374e3"} Nov 24 12:03:56 crc kubenswrapper[5072]: I1124 12:03:56.017412 5072 scope.go:117] "RemoveContainer" containerID="4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f" Nov 24 12:03:56 crc kubenswrapper[5072]: I1124 12:03:56.147634 5072 generic.go:334] "Generic (PLEG): container finished" podID="900fecab-4458-4ac8-8bb7-e5068e9c74d1" containerID="d41c57c797e020c561c78b9c61d10f6f6b59547cc0e683772fcf3b0bab2be7b7" exitCode=0 Nov 24 12:03:56 crc kubenswrapper[5072]: I1124 12:03:56.147699 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8j49" event={"ID":"900fecab-4458-4ac8-8bb7-e5068e9c74d1","Type":"ContainerDied","Data":"d41c57c797e020c561c78b9c61d10f6f6b59547cc0e683772fcf3b0bab2be7b7"} Nov 24 12:03:56 crc kubenswrapper[5072]: I1124 12:03:56.203965 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Nov 24 12:03:57 crc kubenswrapper[5072]: I1124 12:03:57.169514 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerStarted","Data":"093652b8bc6216293abf04bfd41ce4561cf02d4cdffda4280a1d2d687ddf566d"} Nov 24 12:03:57 crc kubenswrapper[5072]: I1124 12:03:57.176773 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j5htq" event={"ID":"8b8c141a-32f9-41ba-95af-8448cf8cd002","Type":"ContainerStarted","Data":"e62a0a500033d44246ac1a177c6073e5a2d78192f194115cdbb4f519ed241c32"} Nov 24 12:03:57 crc kubenswrapper[5072]: I1124 12:03:57.177567 5072 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:03:57 crc kubenswrapper[5072]: I1124 12:03:57.250972 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j5htq" podStartSLOduration=2.923497813 podStartE2EDuration="23.250951112s" podCreationTimestamp="2025-11-24 
12:03:34 +0000 UTC" firstStartedPulling="2025-11-24 12:03:36.176332258 +0000 UTC m=+3267.887856734" lastFinishedPulling="2025-11-24 12:03:56.503785557 +0000 UTC m=+3288.215310033" observedRunningTime="2025-11-24 12:03:57.248828359 +0000 UTC m=+3288.960352855" watchObservedRunningTime="2025-11-24 12:03:57.250951112 +0000 UTC m=+3288.962475578" Nov 24 12:04:00 crc kubenswrapper[5072]: I1124 12:04:00.206468 5072 generic.go:334] "Generic (PLEG): container finished" podID="900fecab-4458-4ac8-8bb7-e5068e9c74d1" containerID="68159c4ce889557bfcf1abfecb7042915a26912a284a64f82195e2580bd6712f" exitCode=0 Nov 24 12:04:00 crc kubenswrapper[5072]: I1124 12:04:00.206557 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8j49" event={"ID":"900fecab-4458-4ac8-8bb7-e5068e9c74d1","Type":"ContainerDied","Data":"68159c4ce889557bfcf1abfecb7042915a26912a284a64f82195e2580bd6712f"} Nov 24 12:04:02 crc kubenswrapper[5072]: I1124 12:04:02.040603 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xtx6h" Nov 24 12:04:02 crc kubenswrapper[5072]: I1124 12:04:02.041181 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xtx6h" Nov 24 12:04:02 crc kubenswrapper[5072]: I1124 12:04:02.224632 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8j49" event={"ID":"900fecab-4458-4ac8-8bb7-e5068e9c74d1","Type":"ContainerStarted","Data":"11b36c28a983f250ba2154a788d970d84f59384792550cb85ff6b6fd42289761"} Nov 24 12:04:02 crc kubenswrapper[5072]: I1124 12:04:02.249254 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g8j49" podStartSLOduration=4.462714778 podStartE2EDuration="8.249231522s" podCreationTimestamp="2025-11-24 12:03:54 +0000 UTC" firstStartedPulling="2025-11-24 12:03:57.177366443 +0000 UTC m=+3288.888890919" lastFinishedPulling="2025-11-24 12:04:00.963883177 +0000 UTC m=+3292.675407663" observedRunningTime="2025-11-24 12:04:02.241240962 +0000 UTC m=+3293.952765438" watchObservedRunningTime="2025-11-24 12:04:02.249231522 +0000 UTC m=+3293.960755998" Nov 24 12:04:03 crc kubenswrapper[5072]: I1124 12:04:03.088677 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-xtx6h" podUID="a0a41c35-fdd9-4f33-befd-5b8540cb7c4f" containerName="registry-server" probeResult="failure" output=< Nov 24 12:04:03 crc kubenswrapper[5072]: timeout: failed to connect service ":50051" within 1s Nov 24 12:04:03 crc kubenswrapper[5072]: > Nov 24 12:04:04 crc kubenswrapper[5072]: I1124 12:04:04.406466 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g8j49" Nov 24 12:04:04 crc kubenswrapper[5072]: I1124 12:04:04.406759 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g8j49" Nov 24 12:04:04 crc kubenswrapper[5072]: I1124 12:04:04.453114 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g8j49" Nov 24 12:04:04 crc kubenswrapper[5072]: I1124 12:04:04.483067 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j5htq" Nov 24 12:04:04 crc kubenswrapper[5072]: I1124 12:04:04.483503 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-j5htq" Nov 24 12:04:05 crc kubenswrapper[5072]: I1124 12:04:05.525915 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j5htq" podUID="8b8c141a-32f9-41ba-95af-8448cf8cd002" containerName="registry-server" probeResult="failure" output=< Nov 24 12:04:05 crc kubenswrapper[5072]: timeout: failed to connect service ":50051" within 1s Nov 24 12:04:05 crc kubenswrapper[5072]: > Nov 24 12:04:13 crc kubenswrapper[5072]: I1124 12:04:13.093613 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-xtx6h" podUID="a0a41c35-fdd9-4f33-befd-5b8540cb7c4f" containerName="registry-server" probeResult="failure" output=< Nov 24 12:04:13 crc kubenswrapper[5072]: timeout: failed to connect service ":50051" within 1s Nov 24 12:04:13 crc kubenswrapper[5072]: > Nov 24 12:04:14 crc kubenswrapper[5072]: I1124 12:04:14.453270 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g8j49" Nov 24 12:04:14 crc kubenswrapper[5072]: I1124 12:04:14.496306 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8j49"] Nov 24 12:04:15 crc kubenswrapper[5072]: I1124 12:04:15.352626 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g8j49" podUID="900fecab-4458-4ac8-8bb7-e5068e9c74d1" containerName="registry-server" containerID="cri-o://11b36c28a983f250ba2154a788d970d84f59384792550cb85ff6b6fd42289761" gracePeriod=2 Nov 24 12:04:15 crc kubenswrapper[5072]: I1124 12:04:15.524034 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j5htq" podUID="8b8c141a-32f9-41ba-95af-8448cf8cd002" containerName="registry-server" probeResult="failure" output=< Nov 24 12:04:15 crc kubenswrapper[5072]: timeout: failed to connect service ":50051" within 1s Nov 24 12:04:15 crc kubenswrapper[5072]: > Nov 24 12:04:16 crc kubenswrapper[5072]: I1124 12:04:16.366024 5072 generic.go:334] "Generic (PLEG): container finished" podID="900fecab-4458-4ac8-8bb7-e5068e9c74d1" containerID="11b36c28a983f250ba2154a788d970d84f59384792550cb85ff6b6fd42289761" exitCode=0 Nov 24 12:04:16 crc kubenswrapper[5072]: I1124 12:04:16.366320 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8j49" event={"ID":"900fecab-4458-4ac8-8bb7-e5068e9c74d1","Type":"ContainerDied","Data":"11b36c28a983f250ba2154a788d970d84f59384792550cb85ff6b6fd42289761"} Nov 24 12:04:16 crc kubenswrapper[5072]: I1124 12:04:16.585059 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g8j49" Nov 24 12:04:16 crc kubenswrapper[5072]: I1124 12:04:16.780346 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45gbk\" (UniqueName: \"kubernetes.io/projected/900fecab-4458-4ac8-8bb7-e5068e9c74d1-kube-api-access-45gbk\") pod \"900fecab-4458-4ac8-8bb7-e5068e9c74d1\" (UID: \"900fecab-4458-4ac8-8bb7-e5068e9c74d1\") " Nov 24 12:04:16 crc kubenswrapper[5072]: I1124 12:04:16.780495 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/900fecab-4458-4ac8-8bb7-e5068e9c74d1-catalog-content\") pod \"900fecab-4458-4ac8-8bb7-e5068e9c74d1\" (UID: \"900fecab-4458-4ac8-8bb7-e5068e9c74d1\") " Nov 24 12:04:16 crc kubenswrapper[5072]: I1124 12:04:16.780559 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/900fecab-4458-4ac8-8bb7-e5068e9c74d1-utilities\") pod \"900fecab-4458-4ac8-8bb7-e5068e9c74d1\" (UID: \"900fecab-4458-4ac8-8bb7-e5068e9c74d1\") " Nov 24 12:04:16 crc kubenswrapper[5072]: I1124 12:04:16.781588 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/900fecab-4458-4ac8-8bb7-e5068e9c74d1-utilities" (OuterVolumeSpecName: "utilities") pod "900fecab-4458-4ac8-8bb7-e5068e9c74d1" (UID: "900fecab-4458-4ac8-8bb7-e5068e9c74d1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:04:16 crc kubenswrapper[5072]: I1124 12:04:16.786478 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/900fecab-4458-4ac8-8bb7-e5068e9c74d1-kube-api-access-45gbk" (OuterVolumeSpecName: "kube-api-access-45gbk") pod "900fecab-4458-4ac8-8bb7-e5068e9c74d1" (UID: "900fecab-4458-4ac8-8bb7-e5068e9c74d1"). InnerVolumeSpecName "kube-api-access-45gbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:04:16 crc kubenswrapper[5072]: I1124 12:04:16.794333 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/900fecab-4458-4ac8-8bb7-e5068e9c74d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "900fecab-4458-4ac8-8bb7-e5068e9c74d1" (UID: "900fecab-4458-4ac8-8bb7-e5068e9c74d1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:04:16 crc kubenswrapper[5072]: I1124 12:04:16.883446 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45gbk\" (UniqueName: \"kubernetes.io/projected/900fecab-4458-4ac8-8bb7-e5068e9c74d1-kube-api-access-45gbk\") on node \"crc\" DevicePath \"\"" Nov 24 12:04:16 crc kubenswrapper[5072]: I1124 12:04:16.883490 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/900fecab-4458-4ac8-8bb7-e5068e9c74d1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:04:16 crc kubenswrapper[5072]: I1124 12:04:16.883499 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/900fecab-4458-4ac8-8bb7-e5068e9c74d1-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:04:17 crc kubenswrapper[5072]: I1124 12:04:17.376477 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8j49" event={"ID":"900fecab-4458-4ac8-8bb7-e5068e9c74d1","Type":"ContainerDied","Data":"f7212b9524d7dec6d41efdf338b323cf42c3ea0af4601b22894628132fd374e3"} Nov 24 12:04:17 crc kubenswrapper[5072]: I1124 12:04:17.376531 5072 scope.go:117] "RemoveContainer" containerID="11b36c28a983f250ba2154a788d970d84f59384792550cb85ff6b6fd42289761" Nov 24 12:04:17 crc kubenswrapper[5072]: I1124 12:04:17.376563 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g8j49" Nov 24 12:04:17 crc kubenswrapper[5072]: I1124 12:04:17.400477 5072 scope.go:117] "RemoveContainer" containerID="68159c4ce889557bfcf1abfecb7042915a26912a284a64f82195e2580bd6712f" Nov 24 12:04:17 crc kubenswrapper[5072]: I1124 12:04:17.401161 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8j49"] Nov 24 12:04:17 crc kubenswrapper[5072]: I1124 12:04:17.410812 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8j49"] Nov 24 12:04:17 crc kubenswrapper[5072]: I1124 12:04:17.419585 5072 scope.go:117] "RemoveContainer" containerID="d41c57c797e020c561c78b9c61d10f6f6b59547cc0e683772fcf3b0bab2be7b7" Nov 24 12:04:19 crc kubenswrapper[5072]: I1124 12:04:19.027749 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="900fecab-4458-4ac8-8bb7-e5068e9c74d1" path="/var/lib/kubelet/pods/900fecab-4458-4ac8-8bb7-e5068e9c74d1/volumes" Nov 24 12:04:23 crc kubenswrapper[5072]: I1124 12:04:23.089031 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-xtx6h" podUID="a0a41c35-fdd9-4f33-befd-5b8540cb7c4f" containerName="registry-server" probeResult="failure" output=< Nov 24 12:04:23 crc kubenswrapper[5072]: timeout: failed to connect service ":50051" within 1s Nov 24 12:04:23 crc kubenswrapper[5072]: > Nov 24 12:04:25 crc kubenswrapper[5072]: I1124 12:04:25.527242 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j5htq" podUID="8b8c141a-32f9-41ba-95af-8448cf8cd002" containerName="registry-server" probeResult="failure" output=< Nov 24 12:04:25 crc kubenswrapper[5072]: timeout: failed to connect service ":50051" within 1s Nov 24 12:04:25 crc kubenswrapper[5072]: > Nov 24 12:04:30 crc kubenswrapper[5072]: I1124 12:04:30.599863 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 12:04:33 crc 
kubenswrapper[5072]: I1124 12:04:33.086081 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-xtx6h" podUID="a0a41c35-fdd9-4f33-befd-5b8540cb7c4f" containerName="registry-server" probeResult="failure" output=< Nov 24 12:04:33 crc kubenswrapper[5072]: timeout: failed to connect service ":50051" within 1s Nov 24 12:04:33 crc kubenswrapper[5072]: > Nov 24 12:04:35 crc kubenswrapper[5072]: I1124 12:04:35.530698 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j5htq" podUID="8b8c141a-32f9-41ba-95af-8448cf8cd002" containerName="registry-server" probeResult="failure" output=< Nov 24 12:04:35 crc kubenswrapper[5072]: timeout: failed to connect service ":50051" within 1s Nov 24 12:04:35 crc kubenswrapper[5072]: > Nov 24 12:04:42 crc kubenswrapper[5072]: I1124 12:04:42.093232 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xtx6h" Nov 24 12:04:42 crc kubenswrapper[5072]: I1124 12:04:42.144139 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xtx6h" Nov 24 12:04:42 crc kubenswrapper[5072]: I1124 12:04:42.333935 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xtx6h"] Nov 24 12:04:43 crc kubenswrapper[5072]: I1124 12:04:43.600875 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xtx6h" podUID="a0a41c35-fdd9-4f33-befd-5b8540cb7c4f" containerName="registry-server" containerID="cri-o://a455c9e2b79808fbb6b39bd835b5581b6abfc572f001f558eede6b05f898c165" gracePeriod=2 Nov 24 12:04:44 crc kubenswrapper[5072]: I1124 12:04:44.172831 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xtx6h" Nov 24 12:04:44 crc kubenswrapper[5072]: I1124 12:04:44.185881 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0a41c35-fdd9-4f33-befd-5b8540cb7c4f-catalog-content\") pod \"a0a41c35-fdd9-4f33-befd-5b8540cb7c4f\" (UID: \"a0a41c35-fdd9-4f33-befd-5b8540cb7c4f\") " Nov 24 12:04:44 crc kubenswrapper[5072]: I1124 12:04:44.259326 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0a41c35-fdd9-4f33-befd-5b8540cb7c4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0a41c35-fdd9-4f33-befd-5b8540cb7c4f" (UID: "a0a41c35-fdd9-4f33-befd-5b8540cb7c4f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:04:44 crc kubenswrapper[5072]: I1124 12:04:44.287620 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0a41c35-fdd9-4f33-befd-5b8540cb7c4f-utilities\") pod \"a0a41c35-fdd9-4f33-befd-5b8540cb7c4f\" (UID: \"a0a41c35-fdd9-4f33-befd-5b8540cb7c4f\") " Nov 24 12:04:44 crc kubenswrapper[5072]: I1124 12:04:44.287906 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clcx2\" (UniqueName: \"kubernetes.io/projected/a0a41c35-fdd9-4f33-befd-5b8540cb7c4f-kube-api-access-clcx2\") pod \"a0a41c35-fdd9-4f33-befd-5b8540cb7c4f\" (UID: \"a0a41c35-fdd9-4f33-befd-5b8540cb7c4f\") " Nov 24 12:04:44 crc kubenswrapper[5072]: I1124 12:04:44.288502 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0a41c35-fdd9-4f33-befd-5b8540cb7c4f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:04:44 crc kubenswrapper[5072]: I1124 12:04:44.289516 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0a41c35-fdd9-4f33-befd-5b8540cb7c4f-utilities" (OuterVolumeSpecName: "utilities") pod "a0a41c35-fdd9-4f33-befd-5b8540cb7c4f" (UID: "a0a41c35-fdd9-4f33-befd-5b8540cb7c4f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:04:44 crc kubenswrapper[5072]: I1124 12:04:44.305686 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0a41c35-fdd9-4f33-befd-5b8540cb7c4f-kube-api-access-clcx2" (OuterVolumeSpecName: "kube-api-access-clcx2") pod "a0a41c35-fdd9-4f33-befd-5b8540cb7c4f" (UID: "a0a41c35-fdd9-4f33-befd-5b8540cb7c4f"). InnerVolumeSpecName "kube-api-access-clcx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:04:44 crc kubenswrapper[5072]: I1124 12:04:44.390554 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clcx2\" (UniqueName: \"kubernetes.io/projected/a0a41c35-fdd9-4f33-befd-5b8540cb7c4f-kube-api-access-clcx2\") on node \"crc\" DevicePath \"\"" Nov 24 12:04:44 crc kubenswrapper[5072]: I1124 12:04:44.390622 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0a41c35-fdd9-4f33-befd-5b8540cb7c4f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:04:44 crc kubenswrapper[5072]: I1124 12:04:44.535835 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j5htq" Nov 24 12:04:44 crc kubenswrapper[5072]: I1124 12:04:44.591588 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j5htq" Nov 24 12:04:44 crc kubenswrapper[5072]: I1124 12:04:44.624983 5072 generic.go:334] "Generic (PLEG): container finished" podID="a0a41c35-fdd9-4f33-befd-5b8540cb7c4f" containerID="a455c9e2b79808fbb6b39bd835b5581b6abfc572f001f558eede6b05f898c165" exitCode=0 Nov 24 12:04:44 crc kubenswrapper[5072]: I1124 12:04:44.625123 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xtx6h" Nov 24 12:04:44 crc kubenswrapper[5072]: I1124 12:04:44.625136 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtx6h" event={"ID":"a0a41c35-fdd9-4f33-befd-5b8540cb7c4f","Type":"ContainerDied","Data":"a455c9e2b79808fbb6b39bd835b5581b6abfc572f001f558eede6b05f898c165"} Nov 24 12:04:44 crc kubenswrapper[5072]: I1124 12:04:44.625205 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtx6h" event={"ID":"a0a41c35-fdd9-4f33-befd-5b8540cb7c4f","Type":"ContainerDied","Data":"d260145393d6af63c7339f200dde5132031b2a09d3432dffd5f54dd6e71a5f96"} Nov 24 12:04:44 crc kubenswrapper[5072]: I1124 12:04:44.625230 5072 scope.go:117] "RemoveContainer" containerID="a455c9e2b79808fbb6b39bd835b5581b6abfc572f001f558eede6b05f898c165" Nov 24 12:04:44 crc kubenswrapper[5072]: I1124 12:04:44.700195 5072 scope.go:117] "RemoveContainer" containerID="2ea592557a38bdb3bd95e00a85bfcf10e64dcfcc08dfd72f2e5adc3ae673b044" Nov 24 12:04:44 crc kubenswrapper[5072]: I1124 12:04:44.701702 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xtx6h"] Nov 24 12:04:44 crc kubenswrapper[5072]: I1124 12:04:44.713529 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xtx6h"] Nov 24 12:04:44 crc kubenswrapper[5072]: I1124 12:04:44.731048 5072 scope.go:117] "RemoveContainer" containerID="e163da41598549385309bf7380923a453ff54e5388f47eb7c6a1fef28fceab6b" Nov 24 12:04:44 crc kubenswrapper[5072]: I1124 12:04:44.781350 5072 scope.go:117] "RemoveContainer" containerID="a455c9e2b79808fbb6b39bd835b5581b6abfc572f001f558eede6b05f898c165" Nov 24 12:04:44 crc kubenswrapper[5072]: E1124 12:04:44.782030 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a455c9e2b79808fbb6b39bd835b5581b6abfc572f001f558eede6b05f898c165\": container with ID starting with a455c9e2b79808fbb6b39bd835b5581b6abfc572f001f558eede6b05f898c165 not found: ID does not exist" containerID="a455c9e2b79808fbb6b39bd835b5581b6abfc572f001f558eede6b05f898c165" Nov 24 12:04:44 crc kubenswrapper[5072]: I1124 12:04:44.782075 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a455c9e2b79808fbb6b39bd835b5581b6abfc572f001f558eede6b05f898c165"} err="failed to get container status \"a455c9e2b79808fbb6b39bd835b5581b6abfc572f001f558eede6b05f898c165\": rpc error: code = NotFound desc = could not find container \"a455c9e2b79808fbb6b39bd835b5581b6abfc572f001f558eede6b05f898c165\": container with ID starting with a455c9e2b79808fbb6b39bd835b5581b6abfc572f001f558eede6b05f898c165 not found: ID does not exist" Nov 24 12:04:44 crc kubenswrapper[5072]: I1124 12:04:44.782103 5072 scope.go:117] "RemoveContainer" containerID="2ea592557a38bdb3bd95e00a85bfcf10e64dcfcc08dfd72f2e5adc3ae673b044" Nov 24 12:04:44 crc kubenswrapper[5072]: E1124 12:04:44.782484 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ea592557a38bdb3bd95e00a85bfcf10e64dcfcc08dfd72f2e5adc3ae673b044\": container with ID starting with 2ea592557a38bdb3bd95e00a85bfcf10e64dcfcc08dfd72f2e5adc3ae673b044 not found: ID does not exist" containerID="2ea592557a38bdb3bd95e00a85bfcf10e64dcfcc08dfd72f2e5adc3ae673b044" Nov 24 12:04:44 crc kubenswrapper[5072]: I1124 12:04:44.782532 5072 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ea592557a38bdb3bd95e00a85bfcf10e64dcfcc08dfd72f2e5adc3ae673b044"} err="failed to get container status \"2ea592557a38bdb3bd95e00a85bfcf10e64dcfcc08dfd72f2e5adc3ae673b044\": rpc error: code = NotFound desc = could not find container \"2ea592557a38bdb3bd95e00a85bfcf10e64dcfcc08dfd72f2e5adc3ae673b044\": container with ID starting with 2ea592557a38bdb3bd95e00a85bfcf10e64dcfcc08dfd72f2e5adc3ae673b044 not found: ID does not exist" Nov 24 12:04:44 crc kubenswrapper[5072]: I1124 12:04:44.782560 5072 scope.go:117] "RemoveContainer" containerID="e163da41598549385309bf7380923a453ff54e5388f47eb7c6a1fef28fceab6b" Nov 24 12:04:44 crc kubenswrapper[5072]: E1124 12:04:44.782853 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e163da41598549385309bf7380923a453ff54e5388f47eb7c6a1fef28fceab6b\": container with ID starting with e163da41598549385309bf7380923a453ff54e5388f47eb7c6a1fef28fceab6b not found: ID does not exist" containerID="e163da41598549385309bf7380923a453ff54e5388f47eb7c6a1fef28fceab6b" Nov 24 12:04:44 crc kubenswrapper[5072]: I1124 12:04:44.782887 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e163da41598549385309bf7380923a453ff54e5388f47eb7c6a1fef28fceab6b"} err="failed to get container status \"e163da41598549385309bf7380923a453ff54e5388f47eb7c6a1fef28fceab6b\": rpc error: code = NotFound desc = could not find container \"e163da41598549385309bf7380923a453ff54e5388f47eb7c6a1fef28fceab6b\": container with ID starting with e163da41598549385309bf7380923a453ff54e5388f47eb7c6a1fef28fceab6b not found: ID does not exist" Nov 24 12:04:45 crc kubenswrapper[5072]: I1124 12:04:45.031925 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0a41c35-fdd9-4f33-befd-5b8540cb7c4f" path="/var/lib/kubelet/pods/a0a41c35-fdd9-4f33-befd-5b8540cb7c4f/volumes" Nov 24 12:04:46 crc kubenswrapper[5072]: I1124 12:04:46.362287 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j5htq"] Nov 24 12:04:46 crc kubenswrapper[5072]: I1124 12:04:46.739807 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ksmz7"] Nov 24 12:04:46 crc kubenswrapper[5072]: I1124 12:04:46.740487 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ksmz7" podUID="467abc7c-eb59-4ec5-a2c4-369c84e0faf0" containerName="registry-server" containerID="cri-o://4988b575732bdb3f1db4a4f92bcc39bafa8b28d2514d18be755d15a6cb247305" gracePeriod=2 Nov 24 12:04:47 crc kubenswrapper[5072]: I1124 12:04:47.671521 5072 generic.go:334] "Generic (PLEG): container finished" podID="467abc7c-eb59-4ec5-a2c4-369c84e0faf0" containerID="4988b575732bdb3f1db4a4f92bcc39bafa8b28d2514d18be755d15a6cb247305" exitCode=0 Nov 24 12:04:47 crc kubenswrapper[5072]: I1124 12:04:47.671610 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ksmz7" event={"ID":"467abc7c-eb59-4ec5-a2c4-369c84e0faf0","Type":"ContainerDied","Data":"4988b575732bdb3f1db4a4f92bcc39bafa8b28d2514d18be755d15a6cb247305"} Nov 24 12:04:48 crc kubenswrapper[5072]: I1124 12:04:48.120886 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ksmz7" Nov 24 12:04:48 crc kubenswrapper[5072]: I1124 12:04:48.271431 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/467abc7c-eb59-4ec5-a2c4-369c84e0faf0-catalog-content\") pod \"467abc7c-eb59-4ec5-a2c4-369c84e0faf0\" (UID: \"467abc7c-eb59-4ec5-a2c4-369c84e0faf0\") " Nov 24 12:04:48 crc kubenswrapper[5072]: I1124 12:04:48.271882 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/467abc7c-eb59-4ec5-a2c4-369c84e0faf0-utilities\") pod \"467abc7c-eb59-4ec5-a2c4-369c84e0faf0\" (UID: \"467abc7c-eb59-4ec5-a2c4-369c84e0faf0\") " Nov 24 12:04:48 crc kubenswrapper[5072]: I1124 12:04:48.272137 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8w95\" (UniqueName: \"kubernetes.io/projected/467abc7c-eb59-4ec5-a2c4-369c84e0faf0-kube-api-access-s8w95\") pod \"467abc7c-eb59-4ec5-a2c4-369c84e0faf0\" (UID: \"467abc7c-eb59-4ec5-a2c4-369c84e0faf0\") " Nov 24 12:04:48 crc kubenswrapper[5072]: I1124 12:04:48.272364 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/467abc7c-eb59-4ec5-a2c4-369c84e0faf0-utilities" (OuterVolumeSpecName: "utilities") pod "467abc7c-eb59-4ec5-a2c4-369c84e0faf0" (UID: "467abc7c-eb59-4ec5-a2c4-369c84e0faf0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:04:48 crc kubenswrapper[5072]: I1124 12:04:48.272959 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/467abc7c-eb59-4ec5-a2c4-369c84e0faf0-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:04:48 crc kubenswrapper[5072]: I1124 12:04:48.278383 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/467abc7c-eb59-4ec5-a2c4-369c84e0faf0-kube-api-access-s8w95" (OuterVolumeSpecName: "kube-api-access-s8w95") pod "467abc7c-eb59-4ec5-a2c4-369c84e0faf0" (UID: "467abc7c-eb59-4ec5-a2c4-369c84e0faf0"). InnerVolumeSpecName "kube-api-access-s8w95". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:04:48 crc kubenswrapper[5072]: I1124 12:04:48.348755 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/467abc7c-eb59-4ec5-a2c4-369c84e0faf0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "467abc7c-eb59-4ec5-a2c4-369c84e0faf0" (UID: "467abc7c-eb59-4ec5-a2c4-369c84e0faf0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:04:48 crc kubenswrapper[5072]: I1124 12:04:48.374554 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8w95\" (UniqueName: \"kubernetes.io/projected/467abc7c-eb59-4ec5-a2c4-369c84e0faf0-kube-api-access-s8w95\") on node \"crc\" DevicePath \"\"" Nov 24 12:04:48 crc kubenswrapper[5072]: I1124 12:04:48.374584 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/467abc7c-eb59-4ec5-a2c4-369c84e0faf0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:04:48 crc kubenswrapper[5072]: I1124 12:04:48.683901 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ksmz7" event={"ID":"467abc7c-eb59-4ec5-a2c4-369c84e0faf0","Type":"ContainerDied","Data":"4060d77882da33da081cb5f154733d3ee098936154f299adff42abec84551738"} Nov 24 12:04:48 crc kubenswrapper[5072]: I1124 12:04:48.685010 5072 scope.go:117] "RemoveContainer" containerID="4988b575732bdb3f1db4a4f92bcc39bafa8b28d2514d18be755d15a6cb247305" Nov 24 12:04:48 crc kubenswrapper[5072]: I1124 12:04:48.683943 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ksmz7" Nov 24 12:04:48 crc kubenswrapper[5072]: I1124 12:04:48.711295 5072 scope.go:117] "RemoveContainer" containerID="6ee720e6a5ffa51974c45dbd7049855b267b3ce32fe74361231e80170f725c96" Nov 24 12:04:48 crc kubenswrapper[5072]: I1124 12:04:48.727757 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ksmz7"] Nov 24 12:04:48 crc kubenswrapper[5072]: I1124 12:04:48.738126 5072 scope.go:117] "RemoveContainer" containerID="baefadfc40c28655b92b039612a9635d5d3a4a1a0be45421895c4dd4af02ab7f" Nov 24 12:04:48 crc kubenswrapper[5072]: I1124 12:04:48.742081 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ksmz7"] Nov 24 12:04:49 crc kubenswrapper[5072]: I1124 12:04:49.029261 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="467abc7c-eb59-4ec5-a2c4-369c84e0faf0" path="/var/lib/kubelet/pods/467abc7c-eb59-4ec5-a2c4-369c84e0faf0/volumes" Nov 24 12:05:24 crc kubenswrapper[5072]: I1124 12:05:24.941835 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 24 12:05:24 crc kubenswrapper[5072]: E1124 12:05:24.944810 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="467abc7c-eb59-4ec5-a2c4-369c84e0faf0" containerName="extract-utilities" Nov 24 12:05:24 crc kubenswrapper[5072]: I1124 12:05:24.944953 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="467abc7c-eb59-4ec5-a2c4-369c84e0faf0" containerName="extract-utilities" Nov 24 12:05:24 crc kubenswrapper[5072]: E1124 12:05:24.945041 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900fecab-4458-4ac8-8bb7-e5068e9c74d1" containerName="extract-content" Nov 24 12:05:24 crc kubenswrapper[5072]: I1124 12:05:24.945098 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="900fecab-4458-4ac8-8bb7-e5068e9c74d1" containerName="extract-content" Nov 24 12:05:24 crc kubenswrapper[5072]: E1124 12:05:24.945164 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0a41c35-fdd9-4f33-befd-5b8540cb7c4f" containerName="extract-utilities" Nov 24 12:05:24 crc kubenswrapper[5072]: I1124 12:05:24.945216 5072 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a0a41c35-fdd9-4f33-befd-5b8540cb7c4f" containerName="extract-utilities" Nov 24 12:05:24 crc kubenswrapper[5072]: E1124 12:05:24.945282 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="467abc7c-eb59-4ec5-a2c4-369c84e0faf0" containerName="extract-content" Nov 24 12:05:24 crc kubenswrapper[5072]: I1124 12:05:24.945333 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="467abc7c-eb59-4ec5-a2c4-369c84e0faf0" containerName="extract-content" Nov 24 12:05:24 crc kubenswrapper[5072]: E1124 12:05:24.945416 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0a41c35-fdd9-4f33-befd-5b8540cb7c4f" containerName="extract-content" Nov 24 12:05:24 crc kubenswrapper[5072]: I1124 12:05:24.945491 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0a41c35-fdd9-4f33-befd-5b8540cb7c4f" containerName="extract-content" Nov 24 12:05:24 crc kubenswrapper[5072]: E1124 12:05:24.945573 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900fecab-4458-4ac8-8bb7-e5068e9c74d1" containerName="extract-utilities" Nov 24 12:05:24 crc kubenswrapper[5072]: I1124 12:05:24.945661 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="900fecab-4458-4ac8-8bb7-e5068e9c74d1" containerName="extract-utilities" Nov 24 12:05:24 crc kubenswrapper[5072]: E1124 12:05:24.945746 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900fecab-4458-4ac8-8bb7-e5068e9c74d1" containerName="registry-server" Nov 24 12:05:24 crc kubenswrapper[5072]: I1124 12:05:24.945813 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="900fecab-4458-4ac8-8bb7-e5068e9c74d1" containerName="registry-server" Nov 24 12:05:24 crc kubenswrapper[5072]: E1124 12:05:24.945888 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0a41c35-fdd9-4f33-befd-5b8540cb7c4f" containerName="registry-server" Nov 24 12:05:24 crc kubenswrapper[5072]: I1124 12:05:24.945963 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0a41c35-fdd9-4f33-befd-5b8540cb7c4f" containerName="registry-server" Nov 24 12:05:24 crc kubenswrapper[5072]: E1124 12:05:24.946049 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="467abc7c-eb59-4ec5-a2c4-369c84e0faf0" containerName="registry-server" Nov 24 12:05:24 crc kubenswrapper[5072]: I1124 12:05:24.946122 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="467abc7c-eb59-4ec5-a2c4-369c84e0faf0" containerName="registry-server" Nov 24 12:05:24 crc kubenswrapper[5072]: I1124 12:05:24.946571 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="900fecab-4458-4ac8-8bb7-e5068e9c74d1" containerName="registry-server" Nov 24 12:05:24 crc kubenswrapper[5072]: I1124 12:05:24.946654 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0a41c35-fdd9-4f33-befd-5b8540cb7c4f" containerName="registry-server" Nov 24 12:05:24 crc kubenswrapper[5072]: I1124 12:05:24.946710 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="467abc7c-eb59-4ec5-a2c4-369c84e0faf0" containerName="registry-server" Nov 24 12:05:24 crc kubenswrapper[5072]: I1124 12:05:24.947478 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 12:05:24 crc kubenswrapper[5072]: I1124 12:05:24.950110 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-kvkcl" Nov 24 12:05:24 crc kubenswrapper[5072]: I1124 12:05:24.950283 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 24 12:05:24 crc kubenswrapper[5072]: I1124 12:05:24.950452 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 24 12:05:24 crc kubenswrapper[5072]: I1124 12:05:24.950303 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 24 12:05:24 crc kubenswrapper[5072]: I1124 12:05:24.954727 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.044789 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qbst\" (UniqueName: \"kubernetes.io/projected/c4384a66-1728-45a3-9ab4-d1479c51cd18-kube-api-access-8qbst\") pod \"tempest-tests-tempest\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.044940 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c4384a66-1728-45a3-9ab4-d1479c51cd18-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.044992 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c4384a66-1728-45a3-9ab4-d1479c51cd18-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.045026 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c4384a66-1728-45a3-9ab4-d1479c51cd18-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.045072 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.045090 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4384a66-1728-45a3-9ab4-d1479c51cd18-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.045152 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/c4384a66-1728-45a3-9ab4-d1479c51cd18-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.045219 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c4384a66-1728-45a3-9ab4-d1479c51cd18-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.045251 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4384a66-1728-45a3-9ab4-d1479c51cd18-config-data\") pod \"tempest-tests-tempest\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.146992 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c4384a66-1728-45a3-9ab4-d1479c51cd18-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.147097 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c4384a66-1728-45a3-9ab4-d1479c51cd18-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.147150 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c4384a66-1728-45a3-9ab4-d1479c51cd18-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.147195 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.147217 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4384a66-1728-45a3-9ab4-d1479c51cd18-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.147284 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c4384a66-1728-45a3-9ab4-d1479c51cd18-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.147324 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c4384a66-1728-45a3-9ab4-d1479c51cd18-ca-certs\") pod \"tempest-tests-tempest\" 
(UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.147355 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4384a66-1728-45a3-9ab4-d1479c51cd18-config-data\") pod \"tempest-tests-tempest\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.147437 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qbst\" (UniqueName: \"kubernetes.io/projected/c4384a66-1728-45a3-9ab4-d1479c51cd18-kube-api-access-8qbst\") pod \"tempest-tests-tempest\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.147539 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c4384a66-1728-45a3-9ab4-d1479c51cd18-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.147975 5072 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.148779 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c4384a66-1728-45a3-9ab4-d1479c51cd18-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.149069 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4384a66-1728-45a3-9ab4-d1479c51cd18-config-data\") pod \"tempest-tests-tempest\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.149245 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c4384a66-1728-45a3-9ab4-d1479c51cd18-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.154355 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4384a66-1728-45a3-9ab4-d1479c51cd18-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.158910 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c4384a66-1728-45a3-9ab4-d1479c51cd18-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 
12:05:25.160519 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c4384a66-1728-45a3-9ab4-d1479c51cd18-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.166289 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qbst\" (UniqueName: \"kubernetes.io/projected/c4384a66-1728-45a3-9ab4-d1479c51cd18-kube-api-access-8qbst\") pod \"tempest-tests-tempest\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.178751 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.275402 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 12:05:25 crc kubenswrapper[5072]: I1124 12:05:25.718390 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 24 12:05:26 crc kubenswrapper[5072]: I1124 12:05:26.059945 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"c4384a66-1728-45a3-9ab4-d1479c51cd18","Type":"ContainerStarted","Data":"eb5a2e2fe0a0d34f7f7e09338e4679b0f44bb4d5536b218d1ec58618dbb284b7"} Nov 24 12:05:53 crc kubenswrapper[5072]: E1124 12:05:53.678136 5072 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 24 12:05:53 crc kubenswrapper[5072]: E1124 12:05:53.678879 5072 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8qbst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(c4384a66-1728-45a3-9ab4-d1479c51cd18): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:05:53 crc kubenswrapper[5072]: E1124 12:05:53.680307 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="c4384a66-1728-45a3-9ab4-d1479c51cd18" Nov 24 12:05:54 crc kubenswrapper[5072]: E1124 12:05:54.341060 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="c4384a66-1728-45a3-9ab4-d1479c51cd18" Nov 24 12:06:09 crc kubenswrapper[5072]: I1124 12:06:09.483099 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"c4384a66-1728-45a3-9ab4-d1479c51cd18","Type":"ContainerStarted","Data":"9d2bfeefe2ed82ed926730fce95369e0e66957e04e2cb48ccddc0bb99c242ab6"} Nov 24 12:06:09 crc kubenswrapper[5072]: I1124 12:06:09.515644 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.642289502 podStartE2EDuration="46.515621291s" podCreationTimestamp="2025-11-24 12:05:23 +0000 UTC" firstStartedPulling="2025-11-24 12:05:25.721810219 +0000 UTC m=+3377.433334695" lastFinishedPulling="2025-11-24 12:06:07.595141988 +0000 UTC m=+3419.306666484" observedRunningTime="2025-11-24 12:06:09.509609051 +0000 UTC m=+3421.221133547" watchObservedRunningTime="2025-11-24 12:06:09.515621291 +0000 UTC m=+3421.227145777" Nov 24 12:06:13 crc kubenswrapper[5072]: I1124 12:06:13.645036 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:06:13 crc kubenswrapper[5072]: I1124 12:06:13.645520 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:06:43 crc kubenswrapper[5072]: I1124 12:06:43.645001 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:06:43 crc kubenswrapper[5072]: I1124 12:06:43.645706 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:06:51 crc kubenswrapper[5072]: I1124 12:06:51.218015 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qrccd"] Nov 24 12:06:51 crc kubenswrapper[5072]: I1124 12:06:51.236451 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qrccd"] Nov 24 12:06:51 crc kubenswrapper[5072]: I1124 12:06:51.236643 5072 util.go:30] "No sandbox for pod can be found. 
Nov 24 12:06:13 crc kubenswrapper[5072]: I1124 12:06:13.645036 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 12:06:13 crc kubenswrapper[5072]: I1124 12:06:13.645520 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 12:06:43 crc kubenswrapper[5072]: I1124 12:06:43.645001 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 12:06:43 crc kubenswrapper[5072]: I1124 12:06:43.645706 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 12:06:51 crc kubenswrapper[5072]: I1124 12:06:51.218015 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qrccd"]
Nov 24 12:06:51 crc kubenswrapper[5072]: I1124 12:06:51.236451 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qrccd"]
Nov 24 12:06:51 crc kubenswrapper[5072]: I1124 12:06:51.236643 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qrccd"
Nov 24 12:06:51 crc kubenswrapper[5072]: I1124 12:06:51.346167 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb484840-f443-4ad3-adb2-9d0a5869857f-catalog-content\") pod \"community-operators-qrccd\" (UID: \"fb484840-f443-4ad3-adb2-9d0a5869857f\") " pod="openshift-marketplace/community-operators-qrccd"
Nov 24 12:06:51 crc kubenswrapper[5072]: I1124 12:06:51.346233 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb484840-f443-4ad3-adb2-9d0a5869857f-utilities\") pod \"community-operators-qrccd\" (UID: \"fb484840-f443-4ad3-adb2-9d0a5869857f\") " pod="openshift-marketplace/community-operators-qrccd"
Nov 24 12:06:51 crc kubenswrapper[5072]: I1124 12:06:51.346418 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8892\" (UniqueName: \"kubernetes.io/projected/fb484840-f443-4ad3-adb2-9d0a5869857f-kube-api-access-d8892\") pod \"community-operators-qrccd\" (UID: \"fb484840-f443-4ad3-adb2-9d0a5869857f\") " pod="openshift-marketplace/community-operators-qrccd"
Nov 24 12:06:51 crc kubenswrapper[5072]: I1124 12:06:51.448240 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8892\" (UniqueName: \"kubernetes.io/projected/fb484840-f443-4ad3-adb2-9d0a5869857f-kube-api-access-d8892\") pod \"community-operators-qrccd\" (UID: \"fb484840-f443-4ad3-adb2-9d0a5869857f\") " pod="openshift-marketplace/community-operators-qrccd"
Nov 24 12:06:51 crc kubenswrapper[5072]: I1124 12:06:51.448344 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb484840-f443-4ad3-adb2-9d0a5869857f-catalog-content\") pod \"community-operators-qrccd\" (UID: \"fb484840-f443-4ad3-adb2-9d0a5869857f\") " pod="openshift-marketplace/community-operators-qrccd"
Nov 24 12:06:51 crc kubenswrapper[5072]: I1124 12:06:51.448387 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb484840-f443-4ad3-adb2-9d0a5869857f-utilities\") pod \"community-operators-qrccd\" (UID: \"fb484840-f443-4ad3-adb2-9d0a5869857f\") " pod="openshift-marketplace/community-operators-qrccd"
Nov 24 12:06:51 crc kubenswrapper[5072]: I1124 12:06:51.448926 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb484840-f443-4ad3-adb2-9d0a5869857f-utilities\") pod \"community-operators-qrccd\" (UID: \"fb484840-f443-4ad3-adb2-9d0a5869857f\") " pod="openshift-marketplace/community-operators-qrccd"
Nov 24 12:06:51 crc kubenswrapper[5072]: I1124 12:06:51.448987 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb484840-f443-4ad3-adb2-9d0a5869857f-catalog-content\") pod \"community-operators-qrccd\" (UID: \"fb484840-f443-4ad3-adb2-9d0a5869857f\") " pod="openshift-marketplace/community-operators-qrccd"
Nov 24 12:06:51 crc kubenswrapper[5072]: I1124 12:06:51.514576 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8892\" (UniqueName: \"kubernetes.io/projected/fb484840-f443-4ad3-adb2-9d0a5869857f-kube-api-access-d8892\") pod \"community-operators-qrccd\" (UID: \"fb484840-f443-4ad3-adb2-9d0a5869857f\") " pod="openshift-marketplace/community-operators-qrccd"
Nov 24 12:06:51 crc kubenswrapper[5072]: I1124 12:06:51.570464 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qrccd"
Nov 24 12:06:52 crc kubenswrapper[5072]: I1124 12:06:52.156213 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qrccd"]
Nov 24 12:06:52 crc kubenswrapper[5072]: I1124 12:06:52.923797 5072 generic.go:334] "Generic (PLEG): container finished" podID="fb484840-f443-4ad3-adb2-9d0a5869857f" containerID="83cfc921c4d5e725f8059003c55aae5c226a7be51cb0dc2e4fe590af1907b966" exitCode=0
Nov 24 12:06:52 crc kubenswrapper[5072]: I1124 12:06:52.923911 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrccd" event={"ID":"fb484840-f443-4ad3-adb2-9d0a5869857f","Type":"ContainerDied","Data":"83cfc921c4d5e725f8059003c55aae5c226a7be51cb0dc2e4fe590af1907b966"}
Nov 24 12:06:52 crc kubenswrapper[5072]: I1124 12:06:52.924177 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrccd" event={"ID":"fb484840-f443-4ad3-adb2-9d0a5869857f","Type":"ContainerStarted","Data":"0236415cd3be2a1db16c3a103292eea133c61ef9b763fdb8a78498b1aafb71a6"}
Nov 24 12:06:54 crc kubenswrapper[5072]: I1124 12:06:54.965054 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrccd" event={"ID":"fb484840-f443-4ad3-adb2-9d0a5869857f","Type":"ContainerStarted","Data":"7771f756cd86b4ac9b4057930badb9a05fdd71824d1b40183ae220ff968e14dc"}
Nov 24 12:06:56 crc kubenswrapper[5072]: I1124 12:06:56.984226 5072 generic.go:334] "Generic (PLEG): container finished" podID="fb484840-f443-4ad3-adb2-9d0a5869857f" containerID="7771f756cd86b4ac9b4057930badb9a05fdd71824d1b40183ae220ff968e14dc" exitCode=0
Nov 24 12:06:56 crc kubenswrapper[5072]: I1124 12:06:56.984427 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrccd" event={"ID":"fb484840-f443-4ad3-adb2-9d0a5869857f","Type":"ContainerDied","Data":"7771f756cd86b4ac9b4057930badb9a05fdd71824d1b40183ae220ff968e14dc"}
Nov 24 12:06:59 crc kubenswrapper[5072]: I1124 12:06:59.011951 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrccd" event={"ID":"fb484840-f443-4ad3-adb2-9d0a5869857f","Type":"ContainerStarted","Data":"dc1e96ff19e1d59fe07aa400476166523ab6a032bb2f3d32c85e964ce04dd178"}
Nov 24 12:06:59 crc kubenswrapper[5072]: I1124 12:06:59.036845 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qrccd" podStartSLOduration=3.147582172 podStartE2EDuration="8.036829372s" podCreationTimestamp="2025-11-24 12:06:51 +0000 UTC" firstStartedPulling="2025-11-24 12:06:52.927115677 +0000 UTC m=+3464.638640153" lastFinishedPulling="2025-11-24 12:06:57.816362877 +0000 UTC m=+3469.527887353" observedRunningTime="2025-11-24 12:06:59.035812287 +0000 UTC m=+3470.747336763" watchObservedRunningTime="2025-11-24 12:06:59.036829372 +0000 UTC m=+3470.748353848"
(probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qrccd" Nov 24 12:07:01 crc kubenswrapper[5072]: I1124 12:07:01.618586 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qrccd" Nov 24 12:07:11 crc kubenswrapper[5072]: I1124 12:07:11.630169 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qrccd" Nov 24 12:07:11 crc kubenswrapper[5072]: I1124 12:07:11.680016 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qrccd"] Nov 24 12:07:12 crc kubenswrapper[5072]: I1124 12:07:12.148724 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qrccd" podUID="fb484840-f443-4ad3-adb2-9d0a5869857f" containerName="registry-server" containerID="cri-o://dc1e96ff19e1d59fe07aa400476166523ab6a032bb2f3d32c85e964ce04dd178" gracePeriod=2 Nov 24 12:07:12 crc kubenswrapper[5072]: I1124 12:07:12.652069 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qrccd" Nov 24 12:07:12 crc kubenswrapper[5072]: I1124 12:07:12.822554 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8892\" (UniqueName: \"kubernetes.io/projected/fb484840-f443-4ad3-adb2-9d0a5869857f-kube-api-access-d8892\") pod \"fb484840-f443-4ad3-adb2-9d0a5869857f\" (UID: \"fb484840-f443-4ad3-adb2-9d0a5869857f\") " Nov 24 12:07:12 crc kubenswrapper[5072]: I1124 12:07:12.822744 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb484840-f443-4ad3-adb2-9d0a5869857f-utilities\") pod \"fb484840-f443-4ad3-adb2-9d0a5869857f\" (UID: \"fb484840-f443-4ad3-adb2-9d0a5869857f\") " Nov 24 12:07:12 crc kubenswrapper[5072]: I1124 12:07:12.822896 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb484840-f443-4ad3-adb2-9d0a5869857f-catalog-content\") pod \"fb484840-f443-4ad3-adb2-9d0a5869857f\" (UID: \"fb484840-f443-4ad3-adb2-9d0a5869857f\") " Nov 24 12:07:12 crc kubenswrapper[5072]: I1124 12:07:12.823669 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb484840-f443-4ad3-adb2-9d0a5869857f-utilities" (OuterVolumeSpecName: "utilities") pod "fb484840-f443-4ad3-adb2-9d0a5869857f" (UID: "fb484840-f443-4ad3-adb2-9d0a5869857f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:07:12 crc kubenswrapper[5072]: I1124 12:07:12.830621 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb484840-f443-4ad3-adb2-9d0a5869857f-kube-api-access-d8892" (OuterVolumeSpecName: "kube-api-access-d8892") pod "fb484840-f443-4ad3-adb2-9d0a5869857f" (UID: "fb484840-f443-4ad3-adb2-9d0a5869857f"). InnerVolumeSpecName "kube-api-access-d8892". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:07:12 crc kubenswrapper[5072]: I1124 12:07:12.880949 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb484840-f443-4ad3-adb2-9d0a5869857f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb484840-f443-4ad3-adb2-9d0a5869857f" (UID: "fb484840-f443-4ad3-adb2-9d0a5869857f"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:07:12 crc kubenswrapper[5072]: I1124 12:07:12.925147 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8892\" (UniqueName: \"kubernetes.io/projected/fb484840-f443-4ad3-adb2-9d0a5869857f-kube-api-access-d8892\") on node \"crc\" DevicePath \"\"" Nov 24 12:07:12 crc kubenswrapper[5072]: I1124 12:07:12.925196 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb484840-f443-4ad3-adb2-9d0a5869857f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:07:12 crc kubenswrapper[5072]: I1124 12:07:12.925211 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb484840-f443-4ad3-adb2-9d0a5869857f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:07:13 crc kubenswrapper[5072]: I1124 12:07:13.160633 5072 generic.go:334] "Generic (PLEG): container finished" podID="fb484840-f443-4ad3-adb2-9d0a5869857f" containerID="dc1e96ff19e1d59fe07aa400476166523ab6a032bb2f3d32c85e964ce04dd178" exitCode=0 Nov 24 12:07:13 crc kubenswrapper[5072]: I1124 12:07:13.160680 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrccd" event={"ID":"fb484840-f443-4ad3-adb2-9d0a5869857f","Type":"ContainerDied","Data":"dc1e96ff19e1d59fe07aa400476166523ab6a032bb2f3d32c85e964ce04dd178"} Nov 24 12:07:13 crc kubenswrapper[5072]: I1124 12:07:13.160708 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrccd" event={"ID":"fb484840-f443-4ad3-adb2-9d0a5869857f","Type":"ContainerDied","Data":"0236415cd3be2a1db16c3a103292eea133c61ef9b763fdb8a78498b1aafb71a6"} Nov 24 12:07:13 crc kubenswrapper[5072]: I1124 12:07:13.160725 5072 scope.go:117] "RemoveContainer" containerID="dc1e96ff19e1d59fe07aa400476166523ab6a032bb2f3d32c85e964ce04dd178" Nov 24 12:07:13 crc kubenswrapper[5072]: I1124 12:07:13.160893 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qrccd" Nov 24 12:07:13 crc kubenswrapper[5072]: I1124 12:07:13.187217 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qrccd"] Nov 24 12:07:13 crc kubenswrapper[5072]: I1124 12:07:13.194448 5072 scope.go:117] "RemoveContainer" containerID="7771f756cd86b4ac9b4057930badb9a05fdd71824d1b40183ae220ff968e14dc" Nov 24 12:07:13 crc kubenswrapper[5072]: I1124 12:07:13.198434 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qrccd"] Nov 24 12:07:13 crc kubenswrapper[5072]: I1124 12:07:13.222325 5072 scope.go:117] "RemoveContainer" containerID="83cfc921c4d5e725f8059003c55aae5c226a7be51cb0dc2e4fe590af1907b966" Nov 24 12:07:13 crc kubenswrapper[5072]: I1124 12:07:13.263467 5072 scope.go:117] "RemoveContainer" containerID="dc1e96ff19e1d59fe07aa400476166523ab6a032bb2f3d32c85e964ce04dd178" Nov 24 12:07:13 crc kubenswrapper[5072]: E1124 12:07:13.263782 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc1e96ff19e1d59fe07aa400476166523ab6a032bb2f3d32c85e964ce04dd178\": container with ID starting with dc1e96ff19e1d59fe07aa400476166523ab6a032bb2f3d32c85e964ce04dd178 not found: ID does not exist" containerID="dc1e96ff19e1d59fe07aa400476166523ab6a032bb2f3d32c85e964ce04dd178" Nov 24 12:07:13 crc kubenswrapper[5072]: I1124 12:07:13.263814 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc1e96ff19e1d59fe07aa400476166523ab6a032bb2f3d32c85e964ce04dd178"} err="failed to get container status \"dc1e96ff19e1d59fe07aa400476166523ab6a032bb2f3d32c85e964ce04dd178\": rpc error: code = NotFound desc = could not find container \"dc1e96ff19e1d59fe07aa400476166523ab6a032bb2f3d32c85e964ce04dd178\": container with ID starting with dc1e96ff19e1d59fe07aa400476166523ab6a032bb2f3d32c85e964ce04dd178 not found: ID does not exist" Nov 24 12:07:13 crc kubenswrapper[5072]: I1124 12:07:13.263835 5072 scope.go:117] "RemoveContainer" containerID="7771f756cd86b4ac9b4057930badb9a05fdd71824d1b40183ae220ff968e14dc" Nov 24 12:07:13 crc kubenswrapper[5072]: E1124 12:07:13.264057 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7771f756cd86b4ac9b4057930badb9a05fdd71824d1b40183ae220ff968e14dc\": container with ID starting with 7771f756cd86b4ac9b4057930badb9a05fdd71824d1b40183ae220ff968e14dc not found: ID does not exist" containerID="7771f756cd86b4ac9b4057930badb9a05fdd71824d1b40183ae220ff968e14dc" Nov 24 12:07:13 crc kubenswrapper[5072]: I1124 12:07:13.264088 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7771f756cd86b4ac9b4057930badb9a05fdd71824d1b40183ae220ff968e14dc"} err="failed to get container status \"7771f756cd86b4ac9b4057930badb9a05fdd71824d1b40183ae220ff968e14dc\": rpc error: code = NotFound desc = could not find container \"7771f756cd86b4ac9b4057930badb9a05fdd71824d1b40183ae220ff968e14dc\": container with ID starting with 7771f756cd86b4ac9b4057930badb9a05fdd71824d1b40183ae220ff968e14dc not found: ID does not exist" Nov 24 12:07:13 crc kubenswrapper[5072]: I1124 12:07:13.264102 5072 scope.go:117] "RemoveContainer" containerID="83cfc921c4d5e725f8059003c55aae5c226a7be51cb0dc2e4fe590af1907b966" Nov 24 12:07:13 crc kubenswrapper[5072]: E1124 12:07:13.264283 5072 log.go:32] "ContainerStatus from runtime service 
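[annotation — not part of the journal] The RemoveContainer / "ContainerStatus from runtime service failed" pairs above are a benign race: the containers were already removed, so CRI-O answers the follow-up status query with gRPC NotFound, which the kubelet logs and then ignores. A sketch of how such an error is classified, using the standard gRPC status package (the error construction here is illustrative, not the kubelet's own code path):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func main() {
	// Stand-in for the runtime's reply above:
	// rpc error: code = NotFound desc = could not find container ...
	err := status.Error(codes.NotFound, "could not find container")
	if status.Code(err) == codes.NotFound {
		fmt.Println("container already gone; safe to ignore:", err)
	}
}
```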
failed" err="rpc error: code = NotFound desc = could not find container \"83cfc921c4d5e725f8059003c55aae5c226a7be51cb0dc2e4fe590af1907b966\": container with ID starting with 83cfc921c4d5e725f8059003c55aae5c226a7be51cb0dc2e4fe590af1907b966 not found: ID does not exist" containerID="83cfc921c4d5e725f8059003c55aae5c226a7be51cb0dc2e4fe590af1907b966" Nov 24 12:07:13 crc kubenswrapper[5072]: I1124 12:07:13.264300 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83cfc921c4d5e725f8059003c55aae5c226a7be51cb0dc2e4fe590af1907b966"} err="failed to get container status \"83cfc921c4d5e725f8059003c55aae5c226a7be51cb0dc2e4fe590af1907b966\": rpc error: code = NotFound desc = could not find container \"83cfc921c4d5e725f8059003c55aae5c226a7be51cb0dc2e4fe590af1907b966\": container with ID starting with 83cfc921c4d5e725f8059003c55aae5c226a7be51cb0dc2e4fe590af1907b966 not found: ID does not exist" Nov 24 12:07:13 crc kubenswrapper[5072]: I1124 12:07:13.645180 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:07:13 crc kubenswrapper[5072]: I1124 12:07:13.645236 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:07:13 crc kubenswrapper[5072]: I1124 12:07:13.645285 5072 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 12:07:13 crc kubenswrapper[5072]: I1124 12:07:13.646825 5072 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"093652b8bc6216293abf04bfd41ce4561cf02d4cdffda4280a1d2d687ddf566d"} pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:07:13 crc kubenswrapper[5072]: I1124 12:07:13.646885 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" containerID="cri-o://093652b8bc6216293abf04bfd41ce4561cf02d4cdffda4280a1d2d687ddf566d" gracePeriod=600 Nov 24 12:07:14 crc kubenswrapper[5072]: I1124 12:07:14.172705 5072 generic.go:334] "Generic (PLEG): container finished" podID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerID="093652b8bc6216293abf04bfd41ce4561cf02d4cdffda4280a1d2d687ddf566d" exitCode=0 Nov 24 12:07:14 crc kubenswrapper[5072]: I1124 12:07:14.172743 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerDied","Data":"093652b8bc6216293abf04bfd41ce4561cf02d4cdffda4280a1d2d687ddf566d"} Nov 24 12:07:14 crc kubenswrapper[5072]: I1124 12:07:14.174242 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" 
event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerStarted","Data":"8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe"} Nov 24 12:07:14 crc kubenswrapper[5072]: I1124 12:07:14.174328 5072 scope.go:117] "RemoveContainer" containerID="4c463b6823449c0875f1fec4633ea521827aee0fee045719621150bcb1ac1a4f" Nov 24 12:07:15 crc kubenswrapper[5072]: I1124 12:07:15.031586 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb484840-f443-4ad3-adb2-9d0a5869857f" path="/var/lib/kubelet/pods/fb484840-f443-4ad3-adb2-9d0a5869857f/volumes" Nov 24 12:08:58 crc kubenswrapper[5072]: I1124 12:08:58.053260 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-6hvhf"] Nov 24 12:08:58 crc kubenswrapper[5072]: I1124 12:08:58.067554 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-d2d4-account-create-hl6fw"] Nov 24 12:08:58 crc kubenswrapper[5072]: I1124 12:08:58.077811 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-6hvhf"] Nov 24 12:08:58 crc kubenswrapper[5072]: I1124 12:08:58.087677 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-d2d4-account-create-hl6fw"] Nov 24 12:08:59 crc kubenswrapper[5072]: I1124 12:08:59.029355 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2b9ee49-0cbe-43d3-a768-74c71d0f79e8" path="/var/lib/kubelet/pods/e2b9ee49-0cbe-43d3-a768-74c71d0f79e8/volumes" Nov 24 12:08:59 crc kubenswrapper[5072]: I1124 12:08:59.031110 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feb68e18-e333-419a-acbf-7bc331cc35a8" path="/var/lib/kubelet/pods/feb68e18-e333-419a-acbf-7bc331cc35a8/volumes" Nov 24 12:09:13 crc kubenswrapper[5072]: I1124 12:09:13.645485 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:09:13 crc kubenswrapper[5072]: I1124 12:09:13.646004 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:09:23 crc kubenswrapper[5072]: I1124 12:09:23.095612 5072 scope.go:117] "RemoveContainer" containerID="c753631300873ca499bd1d589d519ad1c4a6114154e797749625b80ba3094c6d" Nov 24 12:09:23 crc kubenswrapper[5072]: I1124 12:09:23.125488 5072 scope.go:117] "RemoveContainer" containerID="e63f8c6b5db9f53c40123918fdffe97d3fcef308cb10730d815a0815a5d5356d" Nov 24 12:09:23 crc kubenswrapper[5072]: I1124 12:09:23.176935 5072 scope.go:117] "RemoveContainer" containerID="55daa16d88d917071c968a03d09546113f400e633e0c2a745e44231f85549ab4" Nov 24 12:09:23 crc kubenswrapper[5072]: I1124 12:09:23.198926 5072 scope.go:117] "RemoveContainer" containerID="f81646fb82089e09d7e9fe5fc7e11e71bb909c110f7a9bfd42acb274ae728a79" Nov 24 12:09:43 crc kubenswrapper[5072]: I1124 12:09:43.645459 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 
24 12:09:43 crc kubenswrapper[5072]: I1124 12:09:43.645896 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:10:13 crc kubenswrapper[5072]: I1124 12:10:13.645624 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:10:13 crc kubenswrapper[5072]: I1124 12:10:13.646153 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:10:13 crc kubenswrapper[5072]: I1124 12:10:13.646199 5072 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 12:10:13 crc kubenswrapper[5072]: I1124 12:10:13.647075 5072 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe"} pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:10:13 crc kubenswrapper[5072]: I1124 12:10:13.647142 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" containerID="cri-o://8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe" gracePeriod=600 Nov 24 12:10:13 crc kubenswrapper[5072]: E1124 12:10:13.784728 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:10:14 crc kubenswrapper[5072]: I1124 12:10:14.297868 5072 generic.go:334] "Generic (PLEG): container finished" podID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerID="8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe" exitCode=0 Nov 24 12:10:14 crc kubenswrapper[5072]: I1124 12:10:14.297927 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerDied","Data":"8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe"} Nov 24 12:10:14 crc kubenswrapper[5072]: I1124 12:10:14.298243 5072 scope.go:117] "RemoveContainer" containerID="093652b8bc6216293abf04bfd41ce4561cf02d4cdffda4280a1d2d687ddf566d" Nov 24 12:10:14 crc kubenswrapper[5072]: I1124 12:10:14.299089 5072 scope.go:117] "RemoveContainer" 
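[annotation — not part of the journal] From 12:10:14 onward every sync of machine-config-daemon-jfxnb is rejected with CrashLoopBackOff: the container has reached the maximum back-off, so until 5m0s have elapsed since the last failure each retry (the RemoveContainer / "Error syncing pod" pairs below, roughly every 11-15s) is refused without starting anything. A minimal sketch of that gate, under the stated assumption that the pod worker simply compares elapsed time against the current back-off:

```go
package main

import (
	"fmt"
	"time"
)

// Illustrative back-off gate: sync attempts inside the window are rejected
// with the same shape of error as the journal entries below.
func main() {
	lastFailure := time.Now().Add(-90 * time.Second) // container died 90s ago
	backoff := 5 * time.Minute                       // "back-off 5m0s" in the log

	if elapsed := time.Since(lastFailure); elapsed < backoff {
		fmt.Printf("CrashLoopBackOff: back-off %s restarting failed container (retry in %s)\n",
			backoff, (backoff - elapsed).Round(time.Second))
		return
	}
	fmt.Println("back-off expired; restarting container")
}
```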
containerID="8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe" Nov 24 12:10:14 crc kubenswrapper[5072]: E1124 12:10:14.299408 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:10:28 crc kubenswrapper[5072]: I1124 12:10:28.016726 5072 scope.go:117] "RemoveContainer" containerID="8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe" Nov 24 12:10:28 crc kubenswrapper[5072]: E1124 12:10:28.017487 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:10:43 crc kubenswrapper[5072]: I1124 12:10:43.023312 5072 scope.go:117] "RemoveContainer" containerID="8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe" Nov 24 12:10:43 crc kubenswrapper[5072]: E1124 12:10:43.024013 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:10:57 crc kubenswrapper[5072]: I1124 12:10:57.016049 5072 scope.go:117] "RemoveContainer" containerID="8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe" Nov 24 12:10:57 crc kubenswrapper[5072]: E1124 12:10:57.016721 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:11:09 crc kubenswrapper[5072]: I1124 12:11:09.025666 5072 scope.go:117] "RemoveContainer" containerID="8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe" Nov 24 12:11:09 crc kubenswrapper[5072]: E1124 12:11:09.044724 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:11:24 crc kubenswrapper[5072]: I1124 12:11:24.016994 5072 scope.go:117] "RemoveContainer" containerID="8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe" Nov 24 12:11:24 crc kubenswrapper[5072]: E1124 12:11:24.017758 5072 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:11:36 crc kubenswrapper[5072]: I1124 12:11:36.016748 5072 scope.go:117] "RemoveContainer" containerID="8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe" Nov 24 12:11:36 crc kubenswrapper[5072]: E1124 12:11:36.018602 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:11:49 crc kubenswrapper[5072]: I1124 12:11:49.024211 5072 scope.go:117] "RemoveContainer" containerID="8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe" Nov 24 12:11:49 crc kubenswrapper[5072]: E1124 12:11:49.025000 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:12:00 crc kubenswrapper[5072]: I1124 12:12:00.016253 5072 scope.go:117] "RemoveContainer" containerID="8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe" Nov 24 12:12:00 crc kubenswrapper[5072]: E1124 12:12:00.017018 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:12:11 crc kubenswrapper[5072]: I1124 12:12:11.016737 5072 scope.go:117] "RemoveContainer" containerID="8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe" Nov 24 12:12:11 crc kubenswrapper[5072]: E1124 12:12:11.017528 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:12:23 crc kubenswrapper[5072]: I1124 12:12:23.016629 5072 scope.go:117] "RemoveContainer" containerID="8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe" Nov 24 12:12:23 crc kubenswrapper[5072]: E1124 12:12:23.017553 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:12:34 crc kubenswrapper[5072]: I1124 12:12:34.017153 5072 scope.go:117] "RemoveContainer" containerID="8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe" Nov 24 12:12:34 crc kubenswrapper[5072]: E1124 12:12:34.017813 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:12:45 crc kubenswrapper[5072]: I1124 12:12:45.021721 5072 scope.go:117] "RemoveContainer" containerID="8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe" Nov 24 12:12:45 crc kubenswrapper[5072]: E1124 12:12:45.022536 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:12:57 crc kubenswrapper[5072]: I1124 12:12:57.017971 5072 scope.go:117] "RemoveContainer" containerID="8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe" Nov 24 12:12:57 crc kubenswrapper[5072]: E1124 12:12:57.021233 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:13:06 crc kubenswrapper[5072]: I1124 12:13:06.043965 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-b55tw"] Nov 24 12:13:06 crc kubenswrapper[5072]: I1124 12:13:06.053231 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-b55tw"] Nov 24 12:13:07 crc kubenswrapper[5072]: I1124 12:13:07.033722 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a074607-4e56-4d2e-a4ee-87906af89764" path="/var/lib/kubelet/pods/4a074607-4e56-4d2e-a4ee-87906af89764/volumes" Nov 24 12:13:12 crc kubenswrapper[5072]: I1124 12:13:12.016606 5072 scope.go:117] "RemoveContainer" containerID="8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe" Nov 24 12:13:12 crc kubenswrapper[5072]: E1124 12:13:12.017343 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:13:23 crc kubenswrapper[5072]: I1124 12:13:23.344850 5072 scope.go:117] 
"RemoveContainer" containerID="1d87411ad890d3383fdb2466f4b2255ae671da030dc8f2cf61121b7460f5c1b3" Nov 24 12:13:25 crc kubenswrapper[5072]: I1124 12:13:25.017022 5072 scope.go:117] "RemoveContainer" containerID="8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe" Nov 24 12:13:25 crc kubenswrapper[5072]: E1124 12:13:25.017531 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:13:38 crc kubenswrapper[5072]: I1124 12:13:38.017103 5072 scope.go:117] "RemoveContainer" containerID="8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe" Nov 24 12:13:38 crc kubenswrapper[5072]: E1124 12:13:38.017964 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:13:51 crc kubenswrapper[5072]: I1124 12:13:51.018153 5072 scope.go:117] "RemoveContainer" containerID="8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe" Nov 24 12:13:51 crc kubenswrapper[5072]: E1124 12:13:51.019021 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:14:05 crc kubenswrapper[5072]: I1124 12:14:05.016650 5072 scope.go:117] "RemoveContainer" containerID="8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe" Nov 24 12:14:05 crc kubenswrapper[5072]: E1124 12:14:05.017508 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:14:16 crc kubenswrapper[5072]: I1124 12:14:16.016487 5072 scope.go:117] "RemoveContainer" containerID="8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe" Nov 24 12:14:16 crc kubenswrapper[5072]: E1124 12:14:16.017402 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:14:29 crc kubenswrapper[5072]: I1124 12:14:29.032231 5072 scope.go:117] 
"RemoveContainer" containerID="8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe" Nov 24 12:14:29 crc kubenswrapper[5072]: E1124 12:14:29.033154 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:14:44 crc kubenswrapper[5072]: I1124 12:14:44.016621 5072 scope.go:117] "RemoveContainer" containerID="8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe" Nov 24 12:14:44 crc kubenswrapper[5072]: E1124 12:14:44.017324 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:14:51 crc kubenswrapper[5072]: I1124 12:14:51.028872 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-86r54"] Nov 24 12:14:51 crc kubenswrapper[5072]: E1124 12:14:51.029567 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb484840-f443-4ad3-adb2-9d0a5869857f" containerName="extract-content" Nov 24 12:14:51 crc kubenswrapper[5072]: I1124 12:14:51.029577 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb484840-f443-4ad3-adb2-9d0a5869857f" containerName="extract-content" Nov 24 12:14:51 crc kubenswrapper[5072]: E1124 12:14:51.029598 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb484840-f443-4ad3-adb2-9d0a5869857f" containerName="registry-server" Nov 24 12:14:51 crc kubenswrapper[5072]: I1124 12:14:51.029604 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb484840-f443-4ad3-adb2-9d0a5869857f" containerName="registry-server" Nov 24 12:14:51 crc kubenswrapper[5072]: E1124 12:14:51.029616 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb484840-f443-4ad3-adb2-9d0a5869857f" containerName="extract-utilities" Nov 24 12:14:51 crc kubenswrapper[5072]: I1124 12:14:51.029621 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb484840-f443-4ad3-adb2-9d0a5869857f" containerName="extract-utilities" Nov 24 12:14:51 crc kubenswrapper[5072]: I1124 12:14:51.029812 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb484840-f443-4ad3-adb2-9d0a5869857f" containerName="registry-server" Nov 24 12:14:51 crc kubenswrapper[5072]: I1124 12:14:51.031115 5072 util.go:30] "No sandbox for pod can be found. 
Nov 24 12:14:51 crc kubenswrapper[5072]: I1124 12:14:51.032650 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-86r54"]
Nov 24 12:14:51 crc kubenswrapper[5072]: I1124 12:14:51.151453 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67nhx\" (UniqueName: \"kubernetes.io/projected/bf5443ab-a4ca-4d95-8bc2-1a612bfba197-kube-api-access-67nhx\") pod \"certified-operators-86r54\" (UID: \"bf5443ab-a4ca-4d95-8bc2-1a612bfba197\") " pod="openshift-marketplace/certified-operators-86r54"
Nov 24 12:14:51 crc kubenswrapper[5072]: I1124 12:14:51.151677 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf5443ab-a4ca-4d95-8bc2-1a612bfba197-utilities\") pod \"certified-operators-86r54\" (UID: \"bf5443ab-a4ca-4d95-8bc2-1a612bfba197\") " pod="openshift-marketplace/certified-operators-86r54"
Nov 24 12:14:51 crc kubenswrapper[5072]: I1124 12:14:51.151769 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf5443ab-a4ca-4d95-8bc2-1a612bfba197-catalog-content\") pod \"certified-operators-86r54\" (UID: \"bf5443ab-a4ca-4d95-8bc2-1a612bfba197\") " pod="openshift-marketplace/certified-operators-86r54"
Nov 24 12:14:51 crc kubenswrapper[5072]: I1124 12:14:51.253643 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf5443ab-a4ca-4d95-8bc2-1a612bfba197-catalog-content\") pod \"certified-operators-86r54\" (UID: \"bf5443ab-a4ca-4d95-8bc2-1a612bfba197\") " pod="openshift-marketplace/certified-operators-86r54"
Nov 24 12:14:51 crc kubenswrapper[5072]: I1124 12:14:51.253759 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67nhx\" (UniqueName: \"kubernetes.io/projected/bf5443ab-a4ca-4d95-8bc2-1a612bfba197-kube-api-access-67nhx\") pod \"certified-operators-86r54\" (UID: \"bf5443ab-a4ca-4d95-8bc2-1a612bfba197\") " pod="openshift-marketplace/certified-operators-86r54"
Nov 24 12:14:51 crc kubenswrapper[5072]: I1124 12:14:51.253899 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf5443ab-a4ca-4d95-8bc2-1a612bfba197-utilities\") pod \"certified-operators-86r54\" (UID: \"bf5443ab-a4ca-4d95-8bc2-1a612bfba197\") " pod="openshift-marketplace/certified-operators-86r54"
Nov 24 12:14:51 crc kubenswrapper[5072]: I1124 12:14:51.254252 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf5443ab-a4ca-4d95-8bc2-1a612bfba197-catalog-content\") pod \"certified-operators-86r54\" (UID: \"bf5443ab-a4ca-4d95-8bc2-1a612bfba197\") " pod="openshift-marketplace/certified-operators-86r54"
Nov 24 12:14:51 crc kubenswrapper[5072]: I1124 12:14:51.254294 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf5443ab-a4ca-4d95-8bc2-1a612bfba197-utilities\") pod \"certified-operators-86r54\" (UID: \"bf5443ab-a4ca-4d95-8bc2-1a612bfba197\") " pod="openshift-marketplace/certified-operators-86r54"
Nov 24 12:14:51 crc kubenswrapper[5072]: I1124 12:14:51.275213 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67nhx\" (UniqueName: \"kubernetes.io/projected/bf5443ab-a4ca-4d95-8bc2-1a612bfba197-kube-api-access-67nhx\") pod \"certified-operators-86r54\" (UID: \"bf5443ab-a4ca-4d95-8bc2-1a612bfba197\") " pod="openshift-marketplace/certified-operators-86r54"
Nov 24 12:14:51 crc kubenswrapper[5072]: I1124 12:14:51.364975 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-86r54"
Nov 24 12:14:51 crc kubenswrapper[5072]: I1124 12:14:51.876984 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-86r54"]
Nov 24 12:14:52 crc kubenswrapper[5072]: I1124 12:14:52.090769 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86r54" event={"ID":"bf5443ab-a4ca-4d95-8bc2-1a612bfba197","Type":"ContainerStarted","Data":"bfbd9df96644a79e9d69e7b48ac56b188a9080588a9700e82890a1123debadbd"}
Nov 24 12:14:53 crc kubenswrapper[5072]: I1124 12:14:53.103790 5072 generic.go:334] "Generic (PLEG): container finished" podID="bf5443ab-a4ca-4d95-8bc2-1a612bfba197" containerID="8833b41e8a01f40836b0e7bf3af0e88899e3af50eb722a2166612fb112112ff7" exitCode=0
Nov 24 12:14:53 crc kubenswrapper[5072]: I1124 12:14:53.103829 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86r54" event={"ID":"bf5443ab-a4ca-4d95-8bc2-1a612bfba197","Type":"ContainerDied","Data":"8833b41e8a01f40836b0e7bf3af0e88899e3af50eb722a2166612fb112112ff7"}
Nov 24 12:14:53 crc kubenswrapper[5072]: I1124 12:14:53.106727 5072 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 24 12:14:54 crc kubenswrapper[5072]: I1124 12:14:54.113707 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86r54" event={"ID":"bf5443ab-a4ca-4d95-8bc2-1a612bfba197","Type":"ContainerStarted","Data":"5e491053ae6f0fbada40bbc599af1c07eb878a872611b0fe7d96d8acc16b3d1b"}
Nov 24 12:14:54 crc kubenswrapper[5072]: I1124 12:14:54.993219 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zchmr"]
Nov 24 12:14:54 crc kubenswrapper[5072]: I1124 12:14:54.999098 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zchmr"
Nov 24 12:14:55 crc kubenswrapper[5072]: I1124 12:14:55.029956 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zchmr"]
Nov 24 12:14:55 crc kubenswrapper[5072]: I1124 12:14:55.124646 5072 generic.go:334] "Generic (PLEG): container finished" podID="bf5443ab-a4ca-4d95-8bc2-1a612bfba197" containerID="5e491053ae6f0fbada40bbc599af1c07eb878a872611b0fe7d96d8acc16b3d1b" exitCode=0
Nov 24 12:14:55 crc kubenswrapper[5072]: I1124 12:14:55.125405 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86r54" event={"ID":"bf5443ab-a4ca-4d95-8bc2-1a612bfba197","Type":"ContainerDied","Data":"5e491053ae6f0fbada40bbc599af1c07eb878a872611b0fe7d96d8acc16b3d1b"}
Nov 24 12:14:55 crc kubenswrapper[5072]: I1124 12:14:55.130287 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7803b9b-b2d0-4ca4-bc69-e69184bda869-catalog-content\") pod \"redhat-operators-zchmr\" (UID: \"d7803b9b-b2d0-4ca4-bc69-e69184bda869\") " pod="openshift-marketplace/redhat-operators-zchmr"
Nov 24 12:14:55 crc kubenswrapper[5072]: I1124 12:14:55.130456 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5fdd\" (UniqueName: \"kubernetes.io/projected/d7803b9b-b2d0-4ca4-bc69-e69184bda869-kube-api-access-t5fdd\") pod \"redhat-operators-zchmr\" (UID: \"d7803b9b-b2d0-4ca4-bc69-e69184bda869\") " pod="openshift-marketplace/redhat-operators-zchmr"
Nov 24 12:14:55 crc kubenswrapper[5072]: I1124 12:14:55.130520 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7803b9b-b2d0-4ca4-bc69-e69184bda869-utilities\") pod \"redhat-operators-zchmr\" (UID: \"d7803b9b-b2d0-4ca4-bc69-e69184bda869\") " pod="openshift-marketplace/redhat-operators-zchmr"
Nov 24 12:14:55 crc kubenswrapper[5072]: I1124 12:14:55.232592 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5fdd\" (UniqueName: \"kubernetes.io/projected/d7803b9b-b2d0-4ca4-bc69-e69184bda869-kube-api-access-t5fdd\") pod \"redhat-operators-zchmr\" (UID: \"d7803b9b-b2d0-4ca4-bc69-e69184bda869\") " pod="openshift-marketplace/redhat-operators-zchmr"
Nov 24 12:14:55 crc kubenswrapper[5072]: I1124 12:14:55.232853 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7803b9b-b2d0-4ca4-bc69-e69184bda869-utilities\") pod \"redhat-operators-zchmr\" (UID: \"d7803b9b-b2d0-4ca4-bc69-e69184bda869\") " pod="openshift-marketplace/redhat-operators-zchmr"
Nov 24 12:14:55 crc kubenswrapper[5072]: I1124 12:14:55.233049 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7803b9b-b2d0-4ca4-bc69-e69184bda869-catalog-content\") pod \"redhat-operators-zchmr\" (UID: \"d7803b9b-b2d0-4ca4-bc69-e69184bda869\") " pod="openshift-marketplace/redhat-operators-zchmr"
Nov 24 12:14:55 crc kubenswrapper[5072]: I1124 12:14:55.233551 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7803b9b-b2d0-4ca4-bc69-e69184bda869-utilities\") pod \"redhat-operators-zchmr\" (UID: \"d7803b9b-b2d0-4ca4-bc69-e69184bda869\") " pod="openshift-marketplace/redhat-operators-zchmr"
Nov 24 12:14:55 crc kubenswrapper[5072]: I1124 12:14:55.233609 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7803b9b-b2d0-4ca4-bc69-e69184bda869-catalog-content\") pod \"redhat-operators-zchmr\" (UID: \"d7803b9b-b2d0-4ca4-bc69-e69184bda869\") " pod="openshift-marketplace/redhat-operators-zchmr"
Nov 24 12:14:55 crc kubenswrapper[5072]: I1124 12:14:55.253719 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5fdd\" (UniqueName: \"kubernetes.io/projected/d7803b9b-b2d0-4ca4-bc69-e69184bda869-kube-api-access-t5fdd\") pod \"redhat-operators-zchmr\" (UID: \"d7803b9b-b2d0-4ca4-bc69-e69184bda869\") " pod="openshift-marketplace/redhat-operators-zchmr"
Nov 24 12:14:55 crc kubenswrapper[5072]: I1124 12:14:55.321736 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zchmr"
Nov 24 12:14:55 crc kubenswrapper[5072]: I1124 12:14:55.787948 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zchmr"]
Nov 24 12:14:55 crc kubenswrapper[5072]: W1124 12:14:55.788984 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7803b9b_b2d0_4ca4_bc69_e69184bda869.slice/crio-f8805e625551cabb441f81a8d19f13163e41d0e67599894e9a173644f7af477a WatchSource:0}: Error finding container f8805e625551cabb441f81a8d19f13163e41d0e67599894e9a173644f7af477a: Status 404 returned error can't find the container with id f8805e625551cabb441f81a8d19f13163e41d0e67599894e9a173644f7af477a
Nov 24 12:14:56 crc kubenswrapper[5072]: I1124 12:14:56.183139 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86r54" event={"ID":"bf5443ab-a4ca-4d95-8bc2-1a612bfba197","Type":"ContainerStarted","Data":"aaa740f288ad45f079d83b1edf4e8761b9292213d3afdc0fbc3920ac6290655d"}
Nov 24 12:14:56 crc kubenswrapper[5072]: I1124 12:14:56.186008 5072 generic.go:334] "Generic (PLEG): container finished" podID="d7803b9b-b2d0-4ca4-bc69-e69184bda869" containerID="651559b1a56eecea17d37c6ae1faf1217e550a4e993f7ffb866b4b078328e64f" exitCode=0
Nov 24 12:14:56 crc kubenswrapper[5072]: I1124 12:14:56.186050 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zchmr" event={"ID":"d7803b9b-b2d0-4ca4-bc69-e69184bda869","Type":"ContainerDied","Data":"651559b1a56eecea17d37c6ae1faf1217e550a4e993f7ffb866b4b078328e64f"}
Nov 24 12:14:56 crc kubenswrapper[5072]: I1124 12:14:56.186071 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zchmr" event={"ID":"d7803b9b-b2d0-4ca4-bc69-e69184bda869","Type":"ContainerStarted","Data":"f8805e625551cabb441f81a8d19f13163e41d0e67599894e9a173644f7af477a"}
Nov 24 12:14:56 crc kubenswrapper[5072]: I1124 12:14:56.206086 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-86r54" podStartSLOduration=3.7568663989999997 podStartE2EDuration="6.206060558s" podCreationTimestamp="2025-11-24 12:14:50 +0000 UTC" firstStartedPulling="2025-11-24 12:14:53.106296649 +0000 UTC m=+3944.817821155" lastFinishedPulling="2025-11-24 12:14:55.555490838 +0000 UTC m=+3947.267015314" observedRunningTime="2025-11-24 12:14:56.203080374 +0000 UTC m=+3947.914604850" watchObservedRunningTime="2025-11-24 12:14:56.206060558 +0000 UTC m=+3947.917585054"
Nov 24 12:14:57 crc kubenswrapper[5072]: I1124 12:14:57.017085 5072 scope.go:117] "RemoveContainer" containerID="8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe"
Nov 24 12:14:57 crc kubenswrapper[5072]: E1124 12:14:57.017835 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5"
Nov 24 12:14:57 crc kubenswrapper[5072]: I1124 12:14:57.197261 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zchmr" event={"ID":"d7803b9b-b2d0-4ca4-bc69-e69184bda869","Type":"ContainerStarted","Data":"7da42fb20de1da88fabc22942df29ef04d458613f63dcc577ae838b60414c889"}
Nov 24 12:15:00 crc kubenswrapper[5072]: I1124 12:15:00.144819 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399775-cccjh"]
Nov 24 12:15:00 crc kubenswrapper[5072]: I1124 12:15:00.146591 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-cccjh"
Nov 24 12:15:00 crc kubenswrapper[5072]: I1124 12:15:00.148367 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 24 12:15:00 crc kubenswrapper[5072]: I1124 12:15:00.148924 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 24 12:15:00 crc kubenswrapper[5072]: I1124 12:15:00.167292 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399775-cccjh"]
\"kubernetes.io/secret/15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1-secret-volume\") pod \"collect-profiles-29399775-cccjh\" (UID: \"15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-cccjh" Nov 24 12:15:00 crc kubenswrapper[5072]: I1124 12:15:00.340695 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1-config-volume\") pod \"collect-profiles-29399775-cccjh\" (UID: \"15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-cccjh" Nov 24 12:15:00 crc kubenswrapper[5072]: I1124 12:15:00.340867 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq495\" (UniqueName: \"kubernetes.io/projected/15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1-kube-api-access-cq495\") pod \"collect-profiles-29399775-cccjh\" (UID: \"15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-cccjh" Nov 24 12:15:00 crc kubenswrapper[5072]: I1124 12:15:00.342182 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1-config-volume\") pod \"collect-profiles-29399775-cccjh\" (UID: \"15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-cccjh" Nov 24 12:15:00 crc kubenswrapper[5072]: I1124 12:15:00.353159 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1-secret-volume\") pod \"collect-profiles-29399775-cccjh\" (UID: \"15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-cccjh" Nov 24 12:15:00 crc kubenswrapper[5072]: I1124 12:15:00.359092 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq495\" (UniqueName: \"kubernetes.io/projected/15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1-kube-api-access-cq495\") pod \"collect-profiles-29399775-cccjh\" (UID: \"15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-cccjh" Nov 24 12:15:00 crc kubenswrapper[5072]: I1124 12:15:00.465102 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-cccjh" Nov 24 12:15:00 crc kubenswrapper[5072]: W1124 12:15:00.980893 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15f7f9a6_79c2_4f7c_8614_bfd77ddae9f1.slice/crio-d8ab373a4d4a7b51aad76ff822deb7aef505151de594a959f354d975cdb00299 WatchSource:0}: Error finding container d8ab373a4d4a7b51aad76ff822deb7aef505151de594a959f354d975cdb00299: Status 404 returned error can't find the container with id d8ab373a4d4a7b51aad76ff822deb7aef505151de594a959f354d975cdb00299 Nov 24 12:15:00 crc kubenswrapper[5072]: I1124 12:15:00.982955 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399775-cccjh"] Nov 24 12:15:01 crc kubenswrapper[5072]: I1124 12:15:01.236024 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-cccjh" event={"ID":"15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1","Type":"ContainerStarted","Data":"d8ab373a4d4a7b51aad76ff822deb7aef505151de594a959f354d975cdb00299"} Nov 24 12:15:01 crc kubenswrapper[5072]: I1124 12:15:01.365446 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-86r54" Nov 24 12:15:01 crc kubenswrapper[5072]: I1124 12:15:01.365516 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-86r54" Nov 24 12:15:01 crc kubenswrapper[5072]: I1124 12:15:01.416640 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-86r54" Nov 24 12:15:02 crc kubenswrapper[5072]: I1124 12:15:02.249651 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-cccjh" event={"ID":"15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1","Type":"ContainerStarted","Data":"b21a69a9ee9694bacc1127318751747aba70dca63f57c6f4339908d60f7def46"} Nov 24 12:15:02 crc kubenswrapper[5072]: I1124 12:15:02.266930 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-cccjh" podStartSLOduration=2.266907763 podStartE2EDuration="2.266907763s" podCreationTimestamp="2025-11-24 12:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:15:02.266906403 +0000 UTC m=+3953.978430879" watchObservedRunningTime="2025-11-24 12:15:02.266907763 +0000 UTC m=+3953.978432239" Nov 24 12:15:02 crc kubenswrapper[5072]: I1124 12:15:02.301470 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-86r54" Nov 24 12:15:02 crc kubenswrapper[5072]: I1124 12:15:02.586784 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-86r54"] Nov 24 12:15:03 crc kubenswrapper[5072]: I1124 12:15:03.259509 5072 generic.go:334] "Generic (PLEG): container finished" podID="15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1" containerID="b21a69a9ee9694bacc1127318751747aba70dca63f57c6f4339908d60f7def46" exitCode=0 Nov 24 12:15:03 crc kubenswrapper[5072]: I1124 12:15:03.259548 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-cccjh" 
event={"ID":"15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1","Type":"ContainerDied","Data":"b21a69a9ee9694bacc1127318751747aba70dca63f57c6f4339908d60f7def46"} Nov 24 12:15:04 crc kubenswrapper[5072]: I1124 12:15:04.270464 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-86r54" podUID="bf5443ab-a4ca-4d95-8bc2-1a612bfba197" containerName="registry-server" containerID="cri-o://aaa740f288ad45f079d83b1edf4e8761b9292213d3afdc0fbc3920ac6290655d" gracePeriod=2 Nov 24 12:15:04 crc kubenswrapper[5072]: I1124 12:15:04.742581 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-cccjh" Nov 24 12:15:04 crc kubenswrapper[5072]: I1124 12:15:04.838674 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq495\" (UniqueName: \"kubernetes.io/projected/15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1-kube-api-access-cq495\") pod \"15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1\" (UID: \"15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1\") " Nov 24 12:15:04 crc kubenswrapper[5072]: I1124 12:15:04.838770 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1-config-volume\") pod \"15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1\" (UID: \"15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1\") " Nov 24 12:15:04 crc kubenswrapper[5072]: I1124 12:15:04.838936 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1-secret-volume\") pod \"15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1\" (UID: \"15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1\") " Nov 24 12:15:04 crc kubenswrapper[5072]: I1124 12:15:04.841124 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1-config-volume" (OuterVolumeSpecName: "config-volume") pod "15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1" (UID: "15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:04 crc kubenswrapper[5072]: I1124 12:15:04.845393 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1-kube-api-access-cq495" (OuterVolumeSpecName: "kube-api-access-cq495") pod "15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1" (UID: "15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1"). InnerVolumeSpecName "kube-api-access-cq495". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:04 crc kubenswrapper[5072]: I1124 12:15:04.845437 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1" (UID: "15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:04 crc kubenswrapper[5072]: I1124 12:15:04.892294 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-86r54" Nov 24 12:15:04 crc kubenswrapper[5072]: I1124 12:15:04.940984 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq495\" (UniqueName: \"kubernetes.io/projected/15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1-kube-api-access-cq495\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:04 crc kubenswrapper[5072]: I1124 12:15:04.941025 5072 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:04 crc kubenswrapper[5072]: I1124 12:15:04.941034 5072 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.042193 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf5443ab-a4ca-4d95-8bc2-1a612bfba197-utilities\") pod \"bf5443ab-a4ca-4d95-8bc2-1a612bfba197\" (UID: \"bf5443ab-a4ca-4d95-8bc2-1a612bfba197\") " Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.042304 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67nhx\" (UniqueName: \"kubernetes.io/projected/bf5443ab-a4ca-4d95-8bc2-1a612bfba197-kube-api-access-67nhx\") pod \"bf5443ab-a4ca-4d95-8bc2-1a612bfba197\" (UID: \"bf5443ab-a4ca-4d95-8bc2-1a612bfba197\") " Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.042396 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf5443ab-a4ca-4d95-8bc2-1a612bfba197-catalog-content\") pod \"bf5443ab-a4ca-4d95-8bc2-1a612bfba197\" (UID: \"bf5443ab-a4ca-4d95-8bc2-1a612bfba197\") " Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.043822 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf5443ab-a4ca-4d95-8bc2-1a612bfba197-utilities" (OuterVolumeSpecName: "utilities") pod "bf5443ab-a4ca-4d95-8bc2-1a612bfba197" (UID: "bf5443ab-a4ca-4d95-8bc2-1a612bfba197"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.046418 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf5443ab-a4ca-4d95-8bc2-1a612bfba197-kube-api-access-67nhx" (OuterVolumeSpecName: "kube-api-access-67nhx") pod "bf5443ab-a4ca-4d95-8bc2-1a612bfba197" (UID: "bf5443ab-a4ca-4d95-8bc2-1a612bfba197"). InnerVolumeSpecName "kube-api-access-67nhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.089168 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf5443ab-a4ca-4d95-8bc2-1a612bfba197-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf5443ab-a4ca-4d95-8bc2-1a612bfba197" (UID: "bf5443ab-a4ca-4d95-8bc2-1a612bfba197"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.144873 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf5443ab-a4ca-4d95-8bc2-1a612bfba197-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.144912 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67nhx\" (UniqueName: \"kubernetes.io/projected/bf5443ab-a4ca-4d95-8bc2-1a612bfba197-kube-api-access-67nhx\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.144924 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf5443ab-a4ca-4d95-8bc2-1a612bfba197-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.281214 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-cccjh" event={"ID":"15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1","Type":"ContainerDied","Data":"d8ab373a4d4a7b51aad76ff822deb7aef505151de594a959f354d975cdb00299"} Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.281252 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-cccjh" Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.281268 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8ab373a4d4a7b51aad76ff822deb7aef505151de594a959f354d975cdb00299" Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.284611 5072 generic.go:334] "Generic (PLEG): container finished" podID="bf5443ab-a4ca-4d95-8bc2-1a612bfba197" containerID="aaa740f288ad45f079d83b1edf4e8761b9292213d3afdc0fbc3920ac6290655d" exitCode=0 Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.284656 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86r54" event={"ID":"bf5443ab-a4ca-4d95-8bc2-1a612bfba197","Type":"ContainerDied","Data":"aaa740f288ad45f079d83b1edf4e8761b9292213d3afdc0fbc3920ac6290655d"} Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.284688 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86r54" event={"ID":"bf5443ab-a4ca-4d95-8bc2-1a612bfba197","Type":"ContainerDied","Data":"bfbd9df96644a79e9d69e7b48ac56b188a9080588a9700e82890a1123debadbd"} Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.284699 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-86r54" Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.284710 5072 scope.go:117] "RemoveContainer" containerID="aaa740f288ad45f079d83b1edf4e8761b9292213d3afdc0fbc3920ac6290655d" Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.313040 5072 scope.go:117] "RemoveContainer" containerID="5e491053ae6f0fbada40bbc599af1c07eb878a872611b0fe7d96d8acc16b3d1b" Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.334197 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-86r54"] Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.351247 5072 scope.go:117] "RemoveContainer" containerID="8833b41e8a01f40836b0e7bf3af0e88899e3af50eb722a2166612fb112112ff7" Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.353402 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-86r54"] Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.363420 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399730-5x49b"] Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.371762 5072 scope.go:117] "RemoveContainer" containerID="aaa740f288ad45f079d83b1edf4e8761b9292213d3afdc0fbc3920ac6290655d" Nov 24 12:15:05 crc kubenswrapper[5072]: E1124 12:15:05.372661 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaa740f288ad45f079d83b1edf4e8761b9292213d3afdc0fbc3920ac6290655d\": container with ID starting with aaa740f288ad45f079d83b1edf4e8761b9292213d3afdc0fbc3920ac6290655d not found: ID does not exist" containerID="aaa740f288ad45f079d83b1edf4e8761b9292213d3afdc0fbc3920ac6290655d" Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.372691 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaa740f288ad45f079d83b1edf4e8761b9292213d3afdc0fbc3920ac6290655d"} err="failed to get container status \"aaa740f288ad45f079d83b1edf4e8761b9292213d3afdc0fbc3920ac6290655d\": rpc error: code = NotFound desc = could not find container \"aaa740f288ad45f079d83b1edf4e8761b9292213d3afdc0fbc3920ac6290655d\": container with ID starting with aaa740f288ad45f079d83b1edf4e8761b9292213d3afdc0fbc3920ac6290655d not found: ID does not exist" Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.372712 5072 scope.go:117] "RemoveContainer" containerID="5e491053ae6f0fbada40bbc599af1c07eb878a872611b0fe7d96d8acc16b3d1b" Nov 24 12:15:05 crc kubenswrapper[5072]: E1124 12:15:05.372911 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e491053ae6f0fbada40bbc599af1c07eb878a872611b0fe7d96d8acc16b3d1b\": container with ID starting with 5e491053ae6f0fbada40bbc599af1c07eb878a872611b0fe7d96d8acc16b3d1b not found: ID does not exist" containerID="5e491053ae6f0fbada40bbc599af1c07eb878a872611b0fe7d96d8acc16b3d1b" Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.372933 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e491053ae6f0fbada40bbc599af1c07eb878a872611b0fe7d96d8acc16b3d1b"} err="failed to get container status \"5e491053ae6f0fbada40bbc599af1c07eb878a872611b0fe7d96d8acc16b3d1b\": rpc error: code = NotFound desc = could not find container \"5e491053ae6f0fbada40bbc599af1c07eb878a872611b0fe7d96d8acc16b3d1b\": container with ID starting with 
5e491053ae6f0fbada40bbc599af1c07eb878a872611b0fe7d96d8acc16b3d1b not found: ID does not exist" Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.372946 5072 scope.go:117] "RemoveContainer" containerID="8833b41e8a01f40836b0e7bf3af0e88899e3af50eb722a2166612fb112112ff7" Nov 24 12:15:05 crc kubenswrapper[5072]: E1124 12:15:05.373109 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8833b41e8a01f40836b0e7bf3af0e88899e3af50eb722a2166612fb112112ff7\": container with ID starting with 8833b41e8a01f40836b0e7bf3af0e88899e3af50eb722a2166612fb112112ff7 not found: ID does not exist" containerID="8833b41e8a01f40836b0e7bf3af0e88899e3af50eb722a2166612fb112112ff7" Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.373138 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8833b41e8a01f40836b0e7bf3af0e88899e3af50eb722a2166612fb112112ff7"} err="failed to get container status \"8833b41e8a01f40836b0e7bf3af0e88899e3af50eb722a2166612fb112112ff7\": rpc error: code = NotFound desc = could not find container \"8833b41e8a01f40836b0e7bf3af0e88899e3af50eb722a2166612fb112112ff7\": container with ID starting with 8833b41e8a01f40836b0e7bf3af0e88899e3af50eb722a2166612fb112112ff7 not found: ID does not exist" Nov 24 12:15:05 crc kubenswrapper[5072]: I1124 12:15:05.376147 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399730-5x49b"] Nov 24 12:15:07 crc kubenswrapper[5072]: I1124 12:15:07.029902 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf5443ab-a4ca-4d95-8bc2-1a612bfba197" path="/var/lib/kubelet/pods/bf5443ab-a4ca-4d95-8bc2-1a612bfba197/volumes" Nov 24 12:15:07 crc kubenswrapper[5072]: I1124 12:15:07.031192 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c99ac2b9-7719-430e-b9f0-6263982af569" path="/var/lib/kubelet/pods/c99ac2b9-7719-430e-b9f0-6263982af569/volumes" Nov 24 12:15:07 crc kubenswrapper[5072]: I1124 12:15:07.307809 5072 generic.go:334] "Generic (PLEG): container finished" podID="d7803b9b-b2d0-4ca4-bc69-e69184bda869" containerID="7da42fb20de1da88fabc22942df29ef04d458613f63dcc577ae838b60414c889" exitCode=0 Nov 24 12:15:07 crc kubenswrapper[5072]: I1124 12:15:07.307861 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zchmr" event={"ID":"d7803b9b-b2d0-4ca4-bc69-e69184bda869","Type":"ContainerDied","Data":"7da42fb20de1da88fabc22942df29ef04d458613f63dcc577ae838b60414c889"} Nov 24 12:15:08 crc kubenswrapper[5072]: I1124 12:15:08.018951 5072 scope.go:117] "RemoveContainer" containerID="8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe" Nov 24 12:15:08 crc kubenswrapper[5072]: E1124 12:15:08.019627 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:15:08 crc kubenswrapper[5072]: I1124 12:15:08.318794 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zchmr" 
event={"ID":"d7803b9b-b2d0-4ca4-bc69-e69184bda869","Type":"ContainerStarted","Data":"b750798e53ad03c27d4d704ab015536937f7c7a977fd020a3b10b18392b52b2c"} Nov 24 12:15:08 crc kubenswrapper[5072]: I1124 12:15:08.341217 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zchmr" podStartSLOduration=2.5313694289999997 podStartE2EDuration="14.341199791s" podCreationTimestamp="2025-11-24 12:14:54 +0000 UTC" firstStartedPulling="2025-11-24 12:14:56.187931857 +0000 UTC m=+3947.899456333" lastFinishedPulling="2025-11-24 12:15:07.997762219 +0000 UTC m=+3959.709286695" observedRunningTime="2025-11-24 12:15:08.336129985 +0000 UTC m=+3960.047654461" watchObservedRunningTime="2025-11-24 12:15:08.341199791 +0000 UTC m=+3960.052724267" Nov 24 12:15:15 crc kubenswrapper[5072]: I1124 12:15:15.322360 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zchmr" Nov 24 12:15:15 crc kubenswrapper[5072]: I1124 12:15:15.322829 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zchmr" Nov 24 12:15:15 crc kubenswrapper[5072]: I1124 12:15:15.372524 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zchmr" Nov 24 12:15:15 crc kubenswrapper[5072]: I1124 12:15:15.432700 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zchmr" Nov 24 12:15:15 crc kubenswrapper[5072]: I1124 12:15:15.606258 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zchmr"] Nov 24 12:15:17 crc kubenswrapper[5072]: I1124 12:15:17.391709 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zchmr" podUID="d7803b9b-b2d0-4ca4-bc69-e69184bda869" containerName="registry-server" containerID="cri-o://b750798e53ad03c27d4d704ab015536937f7c7a977fd020a3b10b18392b52b2c" gracePeriod=2 Nov 24 12:15:17 crc kubenswrapper[5072]: I1124 12:15:17.915894 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zchmr" Nov 24 12:15:18 crc kubenswrapper[5072]: I1124 12:15:18.018193 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5fdd\" (UniqueName: \"kubernetes.io/projected/d7803b9b-b2d0-4ca4-bc69-e69184bda869-kube-api-access-t5fdd\") pod \"d7803b9b-b2d0-4ca4-bc69-e69184bda869\" (UID: \"d7803b9b-b2d0-4ca4-bc69-e69184bda869\") " Nov 24 12:15:18 crc kubenswrapper[5072]: I1124 12:15:18.018420 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7803b9b-b2d0-4ca4-bc69-e69184bda869-utilities\") pod \"d7803b9b-b2d0-4ca4-bc69-e69184bda869\" (UID: \"d7803b9b-b2d0-4ca4-bc69-e69184bda869\") " Nov 24 12:15:18 crc kubenswrapper[5072]: I1124 12:15:18.018478 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7803b9b-b2d0-4ca4-bc69-e69184bda869-catalog-content\") pod \"d7803b9b-b2d0-4ca4-bc69-e69184bda869\" (UID: \"d7803b9b-b2d0-4ca4-bc69-e69184bda869\") " Nov 24 12:15:18 crc kubenswrapper[5072]: I1124 12:15:18.019054 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7803b9b-b2d0-4ca4-bc69-e69184bda869-utilities" (OuterVolumeSpecName: "utilities") pod "d7803b9b-b2d0-4ca4-bc69-e69184bda869" (UID: "d7803b9b-b2d0-4ca4-bc69-e69184bda869"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:15:18 crc kubenswrapper[5072]: I1124 12:15:18.024643 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7803b9b-b2d0-4ca4-bc69-e69184bda869-kube-api-access-t5fdd" (OuterVolumeSpecName: "kube-api-access-t5fdd") pod "d7803b9b-b2d0-4ca4-bc69-e69184bda869" (UID: "d7803b9b-b2d0-4ca4-bc69-e69184bda869"). InnerVolumeSpecName "kube-api-access-t5fdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:18 crc kubenswrapper[5072]: I1124 12:15:18.117525 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7803b9b-b2d0-4ca4-bc69-e69184bda869-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7803b9b-b2d0-4ca4-bc69-e69184bda869" (UID: "d7803b9b-b2d0-4ca4-bc69-e69184bda869"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:15:18 crc kubenswrapper[5072]: I1124 12:15:18.120938 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7803b9b-b2d0-4ca4-bc69-e69184bda869-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:18 crc kubenswrapper[5072]: I1124 12:15:18.120975 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7803b9b-b2d0-4ca4-bc69-e69184bda869-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:18 crc kubenswrapper[5072]: I1124 12:15:18.120989 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5fdd\" (UniqueName: \"kubernetes.io/projected/d7803b9b-b2d0-4ca4-bc69-e69184bda869-kube-api-access-t5fdd\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:18 crc kubenswrapper[5072]: I1124 12:15:18.400670 5072 generic.go:334] "Generic (PLEG): container finished" podID="d7803b9b-b2d0-4ca4-bc69-e69184bda869" containerID="b750798e53ad03c27d4d704ab015536937f7c7a977fd020a3b10b18392b52b2c" exitCode=0 Nov 24 12:15:18 crc kubenswrapper[5072]: I1124 12:15:18.400710 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zchmr" event={"ID":"d7803b9b-b2d0-4ca4-bc69-e69184bda869","Type":"ContainerDied","Data":"b750798e53ad03c27d4d704ab015536937f7c7a977fd020a3b10b18392b52b2c"} Nov 24 12:15:18 crc kubenswrapper[5072]: I1124 12:15:18.400741 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zchmr" event={"ID":"d7803b9b-b2d0-4ca4-bc69-e69184bda869","Type":"ContainerDied","Data":"f8805e625551cabb441f81a8d19f13163e41d0e67599894e9a173644f7af477a"} Nov 24 12:15:18 crc kubenswrapper[5072]: I1124 12:15:18.400747 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zchmr" Nov 24 12:15:18 crc kubenswrapper[5072]: I1124 12:15:18.400761 5072 scope.go:117] "RemoveContainer" containerID="b750798e53ad03c27d4d704ab015536937f7c7a977fd020a3b10b18392b52b2c" Nov 24 12:15:18 crc kubenswrapper[5072]: I1124 12:15:18.430835 5072 scope.go:117] "RemoveContainer" containerID="7da42fb20de1da88fabc22942df29ef04d458613f63dcc577ae838b60414c889" Nov 24 12:15:18 crc kubenswrapper[5072]: I1124 12:15:18.442310 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zchmr"] Nov 24 12:15:18 crc kubenswrapper[5072]: I1124 12:15:18.455274 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zchmr"] Nov 24 12:15:18 crc kubenswrapper[5072]: I1124 12:15:18.457365 5072 scope.go:117] "RemoveContainer" containerID="651559b1a56eecea17d37c6ae1faf1217e550a4e993f7ffb866b4b078328e64f" Nov 24 12:15:18 crc kubenswrapper[5072]: I1124 12:15:18.502877 5072 scope.go:117] "RemoveContainer" containerID="b750798e53ad03c27d4d704ab015536937f7c7a977fd020a3b10b18392b52b2c" Nov 24 12:15:18 crc kubenswrapper[5072]: E1124 12:15:18.503381 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b750798e53ad03c27d4d704ab015536937f7c7a977fd020a3b10b18392b52b2c\": container with ID starting with b750798e53ad03c27d4d704ab015536937f7c7a977fd020a3b10b18392b52b2c not found: ID does not exist" containerID="b750798e53ad03c27d4d704ab015536937f7c7a977fd020a3b10b18392b52b2c" Nov 24 12:15:18 crc kubenswrapper[5072]: I1124 12:15:18.503413 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b750798e53ad03c27d4d704ab015536937f7c7a977fd020a3b10b18392b52b2c"} err="failed to get container status \"b750798e53ad03c27d4d704ab015536937f7c7a977fd020a3b10b18392b52b2c\": rpc error: code = NotFound desc = could not find container \"b750798e53ad03c27d4d704ab015536937f7c7a977fd020a3b10b18392b52b2c\": container with ID starting with b750798e53ad03c27d4d704ab015536937f7c7a977fd020a3b10b18392b52b2c not found: ID does not exist" Nov 24 12:15:18 crc kubenswrapper[5072]: I1124 12:15:18.503435 5072 scope.go:117] "RemoveContainer" containerID="7da42fb20de1da88fabc22942df29ef04d458613f63dcc577ae838b60414c889" Nov 24 12:15:18 crc kubenswrapper[5072]: E1124 12:15:18.503926 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7da42fb20de1da88fabc22942df29ef04d458613f63dcc577ae838b60414c889\": container with ID starting with 7da42fb20de1da88fabc22942df29ef04d458613f63dcc577ae838b60414c889 not found: ID does not exist" containerID="7da42fb20de1da88fabc22942df29ef04d458613f63dcc577ae838b60414c889" Nov 24 12:15:18 crc kubenswrapper[5072]: I1124 12:15:18.503959 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7da42fb20de1da88fabc22942df29ef04d458613f63dcc577ae838b60414c889"} err="failed to get container status \"7da42fb20de1da88fabc22942df29ef04d458613f63dcc577ae838b60414c889\": rpc error: code = NotFound desc = could not find container \"7da42fb20de1da88fabc22942df29ef04d458613f63dcc577ae838b60414c889\": container with ID starting with 7da42fb20de1da88fabc22942df29ef04d458613f63dcc577ae838b60414c889 not found: ID does not exist" Nov 24 12:15:18 crc kubenswrapper[5072]: I1124 12:15:18.503979 5072 scope.go:117] "RemoveContainer" 
containerID="651559b1a56eecea17d37c6ae1faf1217e550a4e993f7ffb866b4b078328e64f" Nov 24 12:15:18 crc kubenswrapper[5072]: E1124 12:15:18.504706 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"651559b1a56eecea17d37c6ae1faf1217e550a4e993f7ffb866b4b078328e64f\": container with ID starting with 651559b1a56eecea17d37c6ae1faf1217e550a4e993f7ffb866b4b078328e64f not found: ID does not exist" containerID="651559b1a56eecea17d37c6ae1faf1217e550a4e993f7ffb866b4b078328e64f" Nov 24 12:15:18 crc kubenswrapper[5072]: I1124 12:15:18.504730 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"651559b1a56eecea17d37c6ae1faf1217e550a4e993f7ffb866b4b078328e64f"} err="failed to get container status \"651559b1a56eecea17d37c6ae1faf1217e550a4e993f7ffb866b4b078328e64f\": rpc error: code = NotFound desc = could not find container \"651559b1a56eecea17d37c6ae1faf1217e550a4e993f7ffb866b4b078328e64f\": container with ID starting with 651559b1a56eecea17d37c6ae1faf1217e550a4e993f7ffb866b4b078328e64f not found: ID does not exist" Nov 24 12:15:19 crc kubenswrapper[5072]: I1124 12:15:19.028422 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7803b9b-b2d0-4ca4-bc69-e69184bda869" path="/var/lib/kubelet/pods/d7803b9b-b2d0-4ca4-bc69-e69184bda869/volumes" Nov 24 12:15:23 crc kubenswrapper[5072]: I1124 12:15:23.016034 5072 scope.go:117] "RemoveContainer" containerID="8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe" Nov 24 12:15:23 crc kubenswrapper[5072]: I1124 12:15:23.426387 5072 scope.go:117] "RemoveContainer" containerID="0b0cb3684360fc9348a582818d545846e3fe9c5608368c434a21218e947a7fa4" Nov 24 12:15:23 crc kubenswrapper[5072]: I1124 12:15:23.450548 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerStarted","Data":"d220dc7647c7de191bb9661af86034533cedb6d0eef421dd6a5fd92481793daf"} Nov 24 12:16:31 crc kubenswrapper[5072]: I1124 12:16:31.522599 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5wkxh"] Nov 24 12:16:31 crc kubenswrapper[5072]: E1124 12:16:31.523570 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5443ab-a4ca-4d95-8bc2-1a612bfba197" containerName="extract-utilities" Nov 24 12:16:31 crc kubenswrapper[5072]: I1124 12:16:31.523586 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5443ab-a4ca-4d95-8bc2-1a612bfba197" containerName="extract-utilities" Nov 24 12:16:31 crc kubenswrapper[5072]: E1124 12:16:31.523596 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7803b9b-b2d0-4ca4-bc69-e69184bda869" containerName="extract-content" Nov 24 12:16:31 crc kubenswrapper[5072]: I1124 12:16:31.523602 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7803b9b-b2d0-4ca4-bc69-e69184bda869" containerName="extract-content" Nov 24 12:16:31 crc kubenswrapper[5072]: E1124 12:16:31.523628 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1" containerName="collect-profiles" Nov 24 12:16:31 crc kubenswrapper[5072]: I1124 12:16:31.523636 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1" containerName="collect-profiles" Nov 24 12:16:31 crc kubenswrapper[5072]: E1124 12:16:31.523652 5072 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="bf5443ab-a4ca-4d95-8bc2-1a612bfba197" containerName="extract-content" Nov 24 12:16:31 crc kubenswrapper[5072]: I1124 12:16:31.523661 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5443ab-a4ca-4d95-8bc2-1a612bfba197" containerName="extract-content" Nov 24 12:16:31 crc kubenswrapper[5072]: E1124 12:16:31.523683 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7803b9b-b2d0-4ca4-bc69-e69184bda869" containerName="registry-server" Nov 24 12:16:31 crc kubenswrapper[5072]: I1124 12:16:31.523692 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7803b9b-b2d0-4ca4-bc69-e69184bda869" containerName="registry-server" Nov 24 12:16:31 crc kubenswrapper[5072]: E1124 12:16:31.523710 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7803b9b-b2d0-4ca4-bc69-e69184bda869" containerName="extract-utilities" Nov 24 12:16:31 crc kubenswrapper[5072]: I1124 12:16:31.523718 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7803b9b-b2d0-4ca4-bc69-e69184bda869" containerName="extract-utilities" Nov 24 12:16:31 crc kubenswrapper[5072]: E1124 12:16:31.523731 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5443ab-a4ca-4d95-8bc2-1a612bfba197" containerName="registry-server" Nov 24 12:16:31 crc kubenswrapper[5072]: I1124 12:16:31.523738 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5443ab-a4ca-4d95-8bc2-1a612bfba197" containerName="registry-server" Nov 24 12:16:31 crc kubenswrapper[5072]: I1124 12:16:31.523943 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7803b9b-b2d0-4ca4-bc69-e69184bda869" containerName="registry-server" Nov 24 12:16:31 crc kubenswrapper[5072]: I1124 12:16:31.523965 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf5443ab-a4ca-4d95-8bc2-1a612bfba197" containerName="registry-server" Nov 24 12:16:31 crc kubenswrapper[5072]: I1124 12:16:31.523979 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="15f7f9a6-79c2-4f7c-8614-bfd77ddae9f1" containerName="collect-profiles" Nov 24 12:16:31 crc kubenswrapper[5072]: I1124 12:16:31.534971 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wkxh" Nov 24 12:16:31 crc kubenswrapper[5072]: I1124 12:16:31.536058 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wkxh"] Nov 24 12:16:31 crc kubenswrapper[5072]: I1124 12:16:31.604749 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0233d244-890c-4b92-ad69-d11f81252671-catalog-content\") pod \"redhat-marketplace-5wkxh\" (UID: \"0233d244-890c-4b92-ad69-d11f81252671\") " pod="openshift-marketplace/redhat-marketplace-5wkxh" Nov 24 12:16:31 crc kubenswrapper[5072]: I1124 12:16:31.605328 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0233d244-890c-4b92-ad69-d11f81252671-utilities\") pod \"redhat-marketplace-5wkxh\" (UID: \"0233d244-890c-4b92-ad69-d11f81252671\") " pod="openshift-marketplace/redhat-marketplace-5wkxh" Nov 24 12:16:31 crc kubenswrapper[5072]: I1124 12:16:31.605546 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsh5r\" (UniqueName: \"kubernetes.io/projected/0233d244-890c-4b92-ad69-d11f81252671-kube-api-access-vsh5r\") pod \"redhat-marketplace-5wkxh\" (UID: \"0233d244-890c-4b92-ad69-d11f81252671\") " pod="openshift-marketplace/redhat-marketplace-5wkxh" Nov 24 12:16:31 crc kubenswrapper[5072]: I1124 12:16:31.707897 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0233d244-890c-4b92-ad69-d11f81252671-utilities\") pod \"redhat-marketplace-5wkxh\" (UID: \"0233d244-890c-4b92-ad69-d11f81252671\") " pod="openshift-marketplace/redhat-marketplace-5wkxh" Nov 24 12:16:31 crc kubenswrapper[5072]: I1124 12:16:31.707968 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsh5r\" (UniqueName: \"kubernetes.io/projected/0233d244-890c-4b92-ad69-d11f81252671-kube-api-access-vsh5r\") pod \"redhat-marketplace-5wkxh\" (UID: \"0233d244-890c-4b92-ad69-d11f81252671\") " pod="openshift-marketplace/redhat-marketplace-5wkxh" Nov 24 12:16:31 crc kubenswrapper[5072]: I1124 12:16:31.708032 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0233d244-890c-4b92-ad69-d11f81252671-catalog-content\") pod \"redhat-marketplace-5wkxh\" (UID: \"0233d244-890c-4b92-ad69-d11f81252671\") " pod="openshift-marketplace/redhat-marketplace-5wkxh" Nov 24 12:16:31 crc kubenswrapper[5072]: I1124 12:16:31.708611 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0233d244-890c-4b92-ad69-d11f81252671-catalog-content\") pod \"redhat-marketplace-5wkxh\" (UID: \"0233d244-890c-4b92-ad69-d11f81252671\") " pod="openshift-marketplace/redhat-marketplace-5wkxh" Nov 24 12:16:31 crc kubenswrapper[5072]: I1124 12:16:31.708819 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0233d244-890c-4b92-ad69-d11f81252671-utilities\") pod \"redhat-marketplace-5wkxh\" (UID: \"0233d244-890c-4b92-ad69-d11f81252671\") " pod="openshift-marketplace/redhat-marketplace-5wkxh" Nov 24 12:16:31 crc kubenswrapper[5072]: I1124 12:16:31.729331 5072 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vsh5r\" (UniqueName: \"kubernetes.io/projected/0233d244-890c-4b92-ad69-d11f81252671-kube-api-access-vsh5r\") pod \"redhat-marketplace-5wkxh\" (UID: \"0233d244-890c-4b92-ad69-d11f81252671\") " pod="openshift-marketplace/redhat-marketplace-5wkxh" Nov 24 12:16:31 crc kubenswrapper[5072]: I1124 12:16:31.862554 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wkxh" Nov 24 12:16:32 crc kubenswrapper[5072]: I1124 12:16:32.324488 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wkxh"] Nov 24 12:16:33 crc kubenswrapper[5072]: I1124 12:16:33.090943 5072 generic.go:334] "Generic (PLEG): container finished" podID="0233d244-890c-4b92-ad69-d11f81252671" containerID="a61d4457b38cc9aaed7d59b8f2f9679b4fc6ea4153f66625bc3860a28dd7fff6" exitCode=0 Nov 24 12:16:33 crc kubenswrapper[5072]: I1124 12:16:33.091229 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wkxh" event={"ID":"0233d244-890c-4b92-ad69-d11f81252671","Type":"ContainerDied","Data":"a61d4457b38cc9aaed7d59b8f2f9679b4fc6ea4153f66625bc3860a28dd7fff6"} Nov 24 12:16:33 crc kubenswrapper[5072]: I1124 12:16:33.091259 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wkxh" event={"ID":"0233d244-890c-4b92-ad69-d11f81252671","Type":"ContainerStarted","Data":"345c1ea32cde765b2edf10a597242637d8fb8da91c0d9be13e4b50c33ee1e0ea"} Nov 24 12:16:34 crc kubenswrapper[5072]: I1124 12:16:34.102341 5072 generic.go:334] "Generic (PLEG): container finished" podID="0233d244-890c-4b92-ad69-d11f81252671" containerID="12bd06af3f0b920290c90e7bb28ab74c37019fb75eab7442d90a145c8b5caa94" exitCode=0 Nov 24 12:16:34 crc kubenswrapper[5072]: I1124 12:16:34.102414 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wkxh" event={"ID":"0233d244-890c-4b92-ad69-d11f81252671","Type":"ContainerDied","Data":"12bd06af3f0b920290c90e7bb28ab74c37019fb75eab7442d90a145c8b5caa94"} Nov 24 12:16:35 crc kubenswrapper[5072]: I1124 12:16:35.113916 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wkxh" event={"ID":"0233d244-890c-4b92-ad69-d11f81252671","Type":"ContainerStarted","Data":"8d99c2e0e7878fd21c16e4ab2db5198d85efb2ffaa97a93a8332d02567113d7e"} Nov 24 12:16:35 crc kubenswrapper[5072]: I1124 12:16:35.144904 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5wkxh" podStartSLOduration=2.7162403790000003 podStartE2EDuration="4.144883335s" podCreationTimestamp="2025-11-24 12:16:31 +0000 UTC" firstStartedPulling="2025-11-24 12:16:33.097107441 +0000 UTC m=+4044.808631917" lastFinishedPulling="2025-11-24 12:16:34.525750397 +0000 UTC m=+4046.237274873" observedRunningTime="2025-11-24 12:16:35.136276541 +0000 UTC m=+4046.847801037" watchObservedRunningTime="2025-11-24 12:16:35.144883335 +0000 UTC m=+4046.856407831" Nov 24 12:16:41 crc kubenswrapper[5072]: I1124 12:16:41.863040 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5wkxh" Nov 24 12:16:41 crc kubenswrapper[5072]: I1124 12:16:41.863873 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5wkxh" Nov 24 12:16:41 crc kubenswrapper[5072]: I1124 12:16:41.932844 5072 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5wkxh" Nov 24 12:16:42 crc kubenswrapper[5072]: I1124 12:16:42.230757 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5wkxh" Nov 24 12:16:42 crc kubenswrapper[5072]: I1124 12:16:42.280402 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wkxh"] Nov 24 12:16:44 crc kubenswrapper[5072]: I1124 12:16:44.203021 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5wkxh" podUID="0233d244-890c-4b92-ad69-d11f81252671" containerName="registry-server" containerID="cri-o://8d99c2e0e7878fd21c16e4ab2db5198d85efb2ffaa97a93a8332d02567113d7e" gracePeriod=2 Nov 24 12:16:45 crc kubenswrapper[5072]: I1124 12:16:45.211835 5072 generic.go:334] "Generic (PLEG): container finished" podID="0233d244-890c-4b92-ad69-d11f81252671" containerID="8d99c2e0e7878fd21c16e4ab2db5198d85efb2ffaa97a93a8332d02567113d7e" exitCode=0 Nov 24 12:16:45 crc kubenswrapper[5072]: I1124 12:16:45.211912 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wkxh" event={"ID":"0233d244-890c-4b92-ad69-d11f81252671","Type":"ContainerDied","Data":"8d99c2e0e7878fd21c16e4ab2db5198d85efb2ffaa97a93a8332d02567113d7e"} Nov 24 12:16:45 crc kubenswrapper[5072]: I1124 12:16:45.354727 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wkxh" Nov 24 12:16:45 crc kubenswrapper[5072]: I1124 12:16:45.521301 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsh5r\" (UniqueName: \"kubernetes.io/projected/0233d244-890c-4b92-ad69-d11f81252671-kube-api-access-vsh5r\") pod \"0233d244-890c-4b92-ad69-d11f81252671\" (UID: \"0233d244-890c-4b92-ad69-d11f81252671\") " Nov 24 12:16:45 crc kubenswrapper[5072]: I1124 12:16:45.521357 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0233d244-890c-4b92-ad69-d11f81252671-catalog-content\") pod \"0233d244-890c-4b92-ad69-d11f81252671\" (UID: \"0233d244-890c-4b92-ad69-d11f81252671\") " Nov 24 12:16:45 crc kubenswrapper[5072]: I1124 12:16:45.521460 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0233d244-890c-4b92-ad69-d11f81252671-utilities\") pod \"0233d244-890c-4b92-ad69-d11f81252671\" (UID: \"0233d244-890c-4b92-ad69-d11f81252671\") " Nov 24 12:16:45 crc kubenswrapper[5072]: I1124 12:16:45.522409 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0233d244-890c-4b92-ad69-d11f81252671-utilities" (OuterVolumeSpecName: "utilities") pod "0233d244-890c-4b92-ad69-d11f81252671" (UID: "0233d244-890c-4b92-ad69-d11f81252671"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:16:45 crc kubenswrapper[5072]: I1124 12:16:45.527047 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0233d244-890c-4b92-ad69-d11f81252671-kube-api-access-vsh5r" (OuterVolumeSpecName: "kube-api-access-vsh5r") pod "0233d244-890c-4b92-ad69-d11f81252671" (UID: "0233d244-890c-4b92-ad69-d11f81252671"). InnerVolumeSpecName "kube-api-access-vsh5r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:16:45 crc kubenswrapper[5072]: I1124 12:16:45.540736 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0233d244-890c-4b92-ad69-d11f81252671-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0233d244-890c-4b92-ad69-d11f81252671" (UID: "0233d244-890c-4b92-ad69-d11f81252671"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:16:45 crc kubenswrapper[5072]: I1124 12:16:45.623905 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0233d244-890c-4b92-ad69-d11f81252671-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:45 crc kubenswrapper[5072]: I1124 12:16:45.623941 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsh5r\" (UniqueName: \"kubernetes.io/projected/0233d244-890c-4b92-ad69-d11f81252671-kube-api-access-vsh5r\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:45 crc kubenswrapper[5072]: I1124 12:16:45.623954 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0233d244-890c-4b92-ad69-d11f81252671-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:46 crc kubenswrapper[5072]: I1124 12:16:46.225779 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wkxh" event={"ID":"0233d244-890c-4b92-ad69-d11f81252671","Type":"ContainerDied","Data":"345c1ea32cde765b2edf10a597242637d8fb8da91c0d9be13e4b50c33ee1e0ea"} Nov 24 12:16:46 crc kubenswrapper[5072]: I1124 12:16:46.226222 5072 scope.go:117] "RemoveContainer" containerID="8d99c2e0e7878fd21c16e4ab2db5198d85efb2ffaa97a93a8332d02567113d7e" Nov 24 12:16:46 crc kubenswrapper[5072]: I1124 12:16:46.225832 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wkxh" Nov 24 12:16:46 crc kubenswrapper[5072]: I1124 12:16:46.250362 5072 scope.go:117] "RemoveContainer" containerID="12bd06af3f0b920290c90e7bb28ab74c37019fb75eab7442d90a145c8b5caa94" Nov 24 12:16:46 crc kubenswrapper[5072]: I1124 12:16:46.285813 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wkxh"] Nov 24 12:16:46 crc kubenswrapper[5072]: I1124 12:16:46.296030 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wkxh"] Nov 24 12:16:46 crc kubenswrapper[5072]: I1124 12:16:46.301227 5072 scope.go:117] "RemoveContainer" containerID="a61d4457b38cc9aaed7d59b8f2f9679b4fc6ea4153f66625bc3860a28dd7fff6" Nov 24 12:16:47 crc kubenswrapper[5072]: I1124 12:16:47.035701 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0233d244-890c-4b92-ad69-d11f81252671" path="/var/lib/kubelet/pods/0233d244-890c-4b92-ad69-d11f81252671/volumes" Nov 24 12:16:53 crc kubenswrapper[5072]: I1124 12:16:53.370855 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dhk2z"] Nov 24 12:16:53 crc kubenswrapper[5072]: E1124 12:16:53.371744 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0233d244-890c-4b92-ad69-d11f81252671" containerName="extract-content" Nov 24 12:16:53 crc kubenswrapper[5072]: I1124 12:16:53.371757 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="0233d244-890c-4b92-ad69-d11f81252671" containerName="extract-content" Nov 24 12:16:53 crc kubenswrapper[5072]: E1124 12:16:53.371793 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0233d244-890c-4b92-ad69-d11f81252671" containerName="registry-server" Nov 24 12:16:53 crc kubenswrapper[5072]: I1124 12:16:53.371799 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="0233d244-890c-4b92-ad69-d11f81252671" containerName="registry-server" Nov 24 12:16:53 crc kubenswrapper[5072]: E1124 12:16:53.371818 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0233d244-890c-4b92-ad69-d11f81252671" containerName="extract-utilities" Nov 24 12:16:53 crc kubenswrapper[5072]: I1124 12:16:53.371824 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="0233d244-890c-4b92-ad69-d11f81252671" containerName="extract-utilities" Nov 24 12:16:53 crc kubenswrapper[5072]: I1124 12:16:53.372021 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="0233d244-890c-4b92-ad69-d11f81252671" containerName="registry-server" Nov 24 12:16:53 crc kubenswrapper[5072]: I1124 12:16:53.373280 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dhk2z" Nov 24 12:16:53 crc kubenswrapper[5072]: I1124 12:16:53.390752 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dhk2z"] Nov 24 12:16:53 crc kubenswrapper[5072]: I1124 12:16:53.479573 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67bfj\" (UniqueName: \"kubernetes.io/projected/813bb919-406f-42a0-8e17-73d219df6e86-kube-api-access-67bfj\") pod \"community-operators-dhk2z\" (UID: \"813bb919-406f-42a0-8e17-73d219df6e86\") " pod="openshift-marketplace/community-operators-dhk2z" Nov 24 12:16:53 crc kubenswrapper[5072]: I1124 12:16:53.479961 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/813bb919-406f-42a0-8e17-73d219df6e86-catalog-content\") pod \"community-operators-dhk2z\" (UID: \"813bb919-406f-42a0-8e17-73d219df6e86\") " pod="openshift-marketplace/community-operators-dhk2z" Nov 24 12:16:53 crc kubenswrapper[5072]: I1124 12:16:53.480054 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/813bb919-406f-42a0-8e17-73d219df6e86-utilities\") pod \"community-operators-dhk2z\" (UID: \"813bb919-406f-42a0-8e17-73d219df6e86\") " pod="openshift-marketplace/community-operators-dhk2z" Nov 24 12:16:53 crc kubenswrapper[5072]: I1124 12:16:53.582343 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/813bb919-406f-42a0-8e17-73d219df6e86-catalog-content\") pod \"community-operators-dhk2z\" (UID: \"813bb919-406f-42a0-8e17-73d219df6e86\") " pod="openshift-marketplace/community-operators-dhk2z" Nov 24 12:16:53 crc kubenswrapper[5072]: I1124 12:16:53.582418 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/813bb919-406f-42a0-8e17-73d219df6e86-utilities\") pod \"community-operators-dhk2z\" (UID: \"813bb919-406f-42a0-8e17-73d219df6e86\") " pod="openshift-marketplace/community-operators-dhk2z" Nov 24 12:16:53 crc kubenswrapper[5072]: I1124 12:16:53.582517 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67bfj\" (UniqueName: \"kubernetes.io/projected/813bb919-406f-42a0-8e17-73d219df6e86-kube-api-access-67bfj\") pod \"community-operators-dhk2z\" (UID: \"813bb919-406f-42a0-8e17-73d219df6e86\") " pod="openshift-marketplace/community-operators-dhk2z" Nov 24 12:16:53 crc kubenswrapper[5072]: I1124 12:16:53.582884 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/813bb919-406f-42a0-8e17-73d219df6e86-catalog-content\") pod \"community-operators-dhk2z\" (UID: \"813bb919-406f-42a0-8e17-73d219df6e86\") " pod="openshift-marketplace/community-operators-dhk2z" Nov 24 12:16:53 crc kubenswrapper[5072]: I1124 12:16:53.583171 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/813bb919-406f-42a0-8e17-73d219df6e86-utilities\") pod \"community-operators-dhk2z\" (UID: \"813bb919-406f-42a0-8e17-73d219df6e86\") " pod="openshift-marketplace/community-operators-dhk2z" Nov 24 12:16:53 crc kubenswrapper[5072]: I1124 12:16:53.601483 5072 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-67bfj\" (UniqueName: \"kubernetes.io/projected/813bb919-406f-42a0-8e17-73d219df6e86-kube-api-access-67bfj\") pod \"community-operators-dhk2z\" (UID: \"813bb919-406f-42a0-8e17-73d219df6e86\") " pod="openshift-marketplace/community-operators-dhk2z" Nov 24 12:16:53 crc kubenswrapper[5072]: I1124 12:16:53.690606 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dhk2z" Nov 24 12:16:54 crc kubenswrapper[5072]: I1124 12:16:54.249542 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dhk2z"] Nov 24 12:16:54 crc kubenswrapper[5072]: W1124 12:16:54.253758 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod813bb919_406f_42a0_8e17_73d219df6e86.slice/crio-26270b534284c79574c2f8b8883a3f689427b9f5f73741e9ae46bac346ab9582 WatchSource:0}: Error finding container 26270b534284c79574c2f8b8883a3f689427b9f5f73741e9ae46bac346ab9582: Status 404 returned error can't find the container with id 26270b534284c79574c2f8b8883a3f689427b9f5f73741e9ae46bac346ab9582 Nov 24 12:16:54 crc kubenswrapper[5072]: I1124 12:16:54.296039 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhk2z" event={"ID":"813bb919-406f-42a0-8e17-73d219df6e86","Type":"ContainerStarted","Data":"26270b534284c79574c2f8b8883a3f689427b9f5f73741e9ae46bac346ab9582"} Nov 24 12:16:55 crc kubenswrapper[5072]: I1124 12:16:55.305125 5072 generic.go:334] "Generic (PLEG): container finished" podID="813bb919-406f-42a0-8e17-73d219df6e86" containerID="3a14a4c25cce9646c50a385c5d5da648767b8965135c8ab964744b9402abc894" exitCode=0 Nov 24 12:16:55 crc kubenswrapper[5072]: I1124 12:16:55.306347 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhk2z" event={"ID":"813bb919-406f-42a0-8e17-73d219df6e86","Type":"ContainerDied","Data":"3a14a4c25cce9646c50a385c5d5da648767b8965135c8ab964744b9402abc894"} Nov 24 12:16:57 crc kubenswrapper[5072]: I1124 12:16:57.325933 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhk2z" event={"ID":"813bb919-406f-42a0-8e17-73d219df6e86","Type":"ContainerStarted","Data":"42a89df71fcac9ee652a1cc90a9370797911bcb537df750a721b8134afd10e34"} Nov 24 12:16:58 crc kubenswrapper[5072]: I1124 12:16:58.337869 5072 generic.go:334] "Generic (PLEG): container finished" podID="813bb919-406f-42a0-8e17-73d219df6e86" containerID="42a89df71fcac9ee652a1cc90a9370797911bcb537df750a721b8134afd10e34" exitCode=0 Nov 24 12:16:58 crc kubenswrapper[5072]: I1124 12:16:58.337993 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhk2z" event={"ID":"813bb919-406f-42a0-8e17-73d219df6e86","Type":"ContainerDied","Data":"42a89df71fcac9ee652a1cc90a9370797911bcb537df750a721b8134afd10e34"} Nov 24 12:16:59 crc kubenswrapper[5072]: I1124 12:16:59.353794 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhk2z" event={"ID":"813bb919-406f-42a0-8e17-73d219df6e86","Type":"ContainerStarted","Data":"739ed383f71e5085ff2e333310fbf04bc7f0cee683056609a455300a730096b4"} Nov 24 12:16:59 crc kubenswrapper[5072]: I1124 12:16:59.374765 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dhk2z" 
podStartSLOduration=2.68549579 podStartE2EDuration="6.374745709s" podCreationTimestamp="2025-11-24 12:16:53 +0000 UTC" firstStartedPulling="2025-11-24 12:16:55.310061041 +0000 UTC m=+4067.021585517" lastFinishedPulling="2025-11-24 12:16:58.99931096 +0000 UTC m=+4070.710835436" observedRunningTime="2025-11-24 12:16:59.372828111 +0000 UTC m=+4071.084352597" watchObservedRunningTime="2025-11-24 12:16:59.374745709 +0000 UTC m=+4071.086270185" Nov 24 12:17:03 crc kubenswrapper[5072]: I1124 12:17:03.690852 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dhk2z" Nov 24 12:17:03 crc kubenswrapper[5072]: I1124 12:17:03.691434 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dhk2z" Nov 24 12:17:03 crc kubenswrapper[5072]: I1124 12:17:03.733241 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dhk2z" Nov 24 12:17:04 crc kubenswrapper[5072]: I1124 12:17:04.445412 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dhk2z" Nov 24 12:17:04 crc kubenswrapper[5072]: I1124 12:17:04.491298 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dhk2z"] Nov 24 12:17:06 crc kubenswrapper[5072]: I1124 12:17:06.418416 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dhk2z" podUID="813bb919-406f-42a0-8e17-73d219df6e86" containerName="registry-server" containerID="cri-o://739ed383f71e5085ff2e333310fbf04bc7f0cee683056609a455300a730096b4" gracePeriod=2 Nov 24 12:17:07 crc kubenswrapper[5072]: I1124 12:17:07.050291 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dhk2z" Nov 24 12:17:07 crc kubenswrapper[5072]: I1124 12:17:07.186687 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/813bb919-406f-42a0-8e17-73d219df6e86-utilities\") pod \"813bb919-406f-42a0-8e17-73d219df6e86\" (UID: \"813bb919-406f-42a0-8e17-73d219df6e86\") " Nov 24 12:17:07 crc kubenswrapper[5072]: I1124 12:17:07.186752 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/813bb919-406f-42a0-8e17-73d219df6e86-catalog-content\") pod \"813bb919-406f-42a0-8e17-73d219df6e86\" (UID: \"813bb919-406f-42a0-8e17-73d219df6e86\") " Nov 24 12:17:07 crc kubenswrapper[5072]: I1124 12:17:07.186840 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67bfj\" (UniqueName: \"kubernetes.io/projected/813bb919-406f-42a0-8e17-73d219df6e86-kube-api-access-67bfj\") pod \"813bb919-406f-42a0-8e17-73d219df6e86\" (UID: \"813bb919-406f-42a0-8e17-73d219df6e86\") " Nov 24 12:17:07 crc kubenswrapper[5072]: I1124 12:17:07.188182 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/813bb919-406f-42a0-8e17-73d219df6e86-utilities" (OuterVolumeSpecName: "utilities") pod "813bb919-406f-42a0-8e17-73d219df6e86" (UID: "813bb919-406f-42a0-8e17-73d219df6e86"). InnerVolumeSpecName "utilities". 
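Note: the pod_startup_latency_tracker record above encodes two durations: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling; kubelet does the subtraction on the monotonic m=+… readings). A minimal Go check of the arithmetic using the values from the community-operators-dhk2z record (an illustration of the bookkeeping, not kubelet code):

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Layout matches the log's "2025-11-24 12:16:53 +0000 UTC" style.
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-11-24 12:16:53 +0000 UTC")
	firstPull := mustParse("2025-11-24 12:16:55.310061041 +0000 UTC")
	lastPull := mustParse("2025-11-24 12:16:58.99931096 +0000 UTC")
	running := mustParse("2025-11-24 12:16:59.374745709 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: pull time excluded
	fmt.Println(e2e, slo)                // 6.374745709s 2.68549579s, as logged
}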
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:17:07 crc kubenswrapper[5072]: I1124 12:17:07.197454 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/813bb919-406f-42a0-8e17-73d219df6e86-kube-api-access-67bfj" (OuterVolumeSpecName: "kube-api-access-67bfj") pod "813bb919-406f-42a0-8e17-73d219df6e86" (UID: "813bb919-406f-42a0-8e17-73d219df6e86"). InnerVolumeSpecName "kube-api-access-67bfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:17:07 crc kubenswrapper[5072]: I1124 12:17:07.259994 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/813bb919-406f-42a0-8e17-73d219df6e86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "813bb919-406f-42a0-8e17-73d219df6e86" (UID: "813bb919-406f-42a0-8e17-73d219df6e86"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:17:07 crc kubenswrapper[5072]: I1124 12:17:07.289210 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/813bb919-406f-42a0-8e17-73d219df6e86-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:07 crc kubenswrapper[5072]: I1124 12:17:07.289247 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/813bb919-406f-42a0-8e17-73d219df6e86-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:07 crc kubenswrapper[5072]: I1124 12:17:07.289259 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67bfj\" (UniqueName: \"kubernetes.io/projected/813bb919-406f-42a0-8e17-73d219df6e86-kube-api-access-67bfj\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:07 crc kubenswrapper[5072]: I1124 12:17:07.429444 5072 generic.go:334] "Generic (PLEG): container finished" podID="813bb919-406f-42a0-8e17-73d219df6e86" containerID="739ed383f71e5085ff2e333310fbf04bc7f0cee683056609a455300a730096b4" exitCode=0 Nov 24 12:17:07 crc kubenswrapper[5072]: I1124 12:17:07.429497 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhk2z" event={"ID":"813bb919-406f-42a0-8e17-73d219df6e86","Type":"ContainerDied","Data":"739ed383f71e5085ff2e333310fbf04bc7f0cee683056609a455300a730096b4"} Nov 24 12:17:07 crc kubenswrapper[5072]: I1124 12:17:07.429528 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhk2z" event={"ID":"813bb919-406f-42a0-8e17-73d219df6e86","Type":"ContainerDied","Data":"26270b534284c79574c2f8b8883a3f689427b9f5f73741e9ae46bac346ab9582"} Nov 24 12:17:07 crc kubenswrapper[5072]: I1124 12:17:07.429549 5072 scope.go:117] "RemoveContainer" containerID="739ed383f71e5085ff2e333310fbf04bc7f0cee683056609a455300a730096b4" Nov 24 12:17:07 crc kubenswrapper[5072]: I1124 12:17:07.429503 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dhk2z" Nov 24 12:17:07 crc kubenswrapper[5072]: I1124 12:17:07.452559 5072 scope.go:117] "RemoveContainer" containerID="42a89df71fcac9ee652a1cc90a9370797911bcb537df750a721b8134afd10e34" Nov 24 12:17:07 crc kubenswrapper[5072]: I1124 12:17:07.467425 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dhk2z"] Nov 24 12:17:07 crc kubenswrapper[5072]: I1124 12:17:07.474406 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dhk2z"] Nov 24 12:17:07 crc kubenswrapper[5072]: I1124 12:17:07.484001 5072 scope.go:117] "RemoveContainer" containerID="3a14a4c25cce9646c50a385c5d5da648767b8965135c8ab964744b9402abc894" Nov 24 12:17:07 crc kubenswrapper[5072]: I1124 12:17:07.528320 5072 scope.go:117] "RemoveContainer" containerID="739ed383f71e5085ff2e333310fbf04bc7f0cee683056609a455300a730096b4" Nov 24 12:17:07 crc kubenswrapper[5072]: E1124 12:17:07.528758 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"739ed383f71e5085ff2e333310fbf04bc7f0cee683056609a455300a730096b4\": container with ID starting with 739ed383f71e5085ff2e333310fbf04bc7f0cee683056609a455300a730096b4 not found: ID does not exist" containerID="739ed383f71e5085ff2e333310fbf04bc7f0cee683056609a455300a730096b4" Nov 24 12:17:07 crc kubenswrapper[5072]: I1124 12:17:07.528788 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"739ed383f71e5085ff2e333310fbf04bc7f0cee683056609a455300a730096b4"} err="failed to get container status \"739ed383f71e5085ff2e333310fbf04bc7f0cee683056609a455300a730096b4\": rpc error: code = NotFound desc = could not find container \"739ed383f71e5085ff2e333310fbf04bc7f0cee683056609a455300a730096b4\": container with ID starting with 739ed383f71e5085ff2e333310fbf04bc7f0cee683056609a455300a730096b4 not found: ID does not exist" Nov 24 12:17:07 crc kubenswrapper[5072]: I1124 12:17:07.528809 5072 scope.go:117] "RemoveContainer" containerID="42a89df71fcac9ee652a1cc90a9370797911bcb537df750a721b8134afd10e34" Nov 24 12:17:07 crc kubenswrapper[5072]: E1124 12:17:07.529003 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42a89df71fcac9ee652a1cc90a9370797911bcb537df750a721b8134afd10e34\": container with ID starting with 42a89df71fcac9ee652a1cc90a9370797911bcb537df750a721b8134afd10e34 not found: ID does not exist" containerID="42a89df71fcac9ee652a1cc90a9370797911bcb537df750a721b8134afd10e34" Nov 24 12:17:07 crc kubenswrapper[5072]: I1124 12:17:07.529021 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42a89df71fcac9ee652a1cc90a9370797911bcb537df750a721b8134afd10e34"} err="failed to get container status \"42a89df71fcac9ee652a1cc90a9370797911bcb537df750a721b8134afd10e34\": rpc error: code = NotFound desc = could not find container \"42a89df71fcac9ee652a1cc90a9370797911bcb537df750a721b8134afd10e34\": container with ID starting with 42a89df71fcac9ee652a1cc90a9370797911bcb537df750a721b8134afd10e34 not found: ID does not exist" Nov 24 12:17:07 crc kubenswrapper[5072]: I1124 12:17:07.529032 5072 scope.go:117] "RemoveContainer" containerID="3a14a4c25cce9646c50a385c5d5da648767b8965135c8ab964744b9402abc894" Nov 24 12:17:07 crc kubenswrapper[5072]: E1124 12:17:07.529258 5072 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3a14a4c25cce9646c50a385c5d5da648767b8965135c8ab964744b9402abc894\": container with ID starting with 3a14a4c25cce9646c50a385c5d5da648767b8965135c8ab964744b9402abc894 not found: ID does not exist" containerID="3a14a4c25cce9646c50a385c5d5da648767b8965135c8ab964744b9402abc894" Nov 24 12:17:07 crc kubenswrapper[5072]: I1124 12:17:07.529279 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a14a4c25cce9646c50a385c5d5da648767b8965135c8ab964744b9402abc894"} err="failed to get container status \"3a14a4c25cce9646c50a385c5d5da648767b8965135c8ab964744b9402abc894\": rpc error: code = NotFound desc = could not find container \"3a14a4c25cce9646c50a385c5d5da648767b8965135c8ab964744b9402abc894\": container with ID starting with 3a14a4c25cce9646c50a385c5d5da648767b8965135c8ab964744b9402abc894 not found: ID does not exist" Nov 24 12:17:09 crc kubenswrapper[5072]: I1124 12:17:09.054466 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="813bb919-406f-42a0-8e17-73d219df6e86" path="/var/lib/kubelet/pods/813bb919-406f-42a0-8e17-73d219df6e86/volumes" Nov 24 12:17:43 crc kubenswrapper[5072]: I1124 12:17:43.645486 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:17:43 crc kubenswrapper[5072]: I1124 12:17:43.646238 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:18:13 crc kubenswrapper[5072]: I1124 12:18:13.645506 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:18:13 crc kubenswrapper[5072]: I1124 12:18:13.646047 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:18:43 crc kubenswrapper[5072]: I1124 12:18:43.645558 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:18:43 crc kubenswrapper[5072]: I1124 12:18:43.646204 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:18:43 crc kubenswrapper[5072]: I1124 12:18:43.646271 5072 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 12:18:43 crc kubenswrapper[5072]: I1124 12:18:43.647459 5072 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d220dc7647c7de191bb9661af86034533cedb6d0eef421dd6a5fd92481793daf"} pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:18:43 crc kubenswrapper[5072]: I1124 12:18:43.647563 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" containerID="cri-o://d220dc7647c7de191bb9661af86034533cedb6d0eef421dd6a5fd92481793daf" gracePeriod=600 Nov 24 12:18:44 crc kubenswrapper[5072]: I1124 12:18:44.394790 5072 generic.go:334] "Generic (PLEG): container finished" podID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerID="d220dc7647c7de191bb9661af86034533cedb6d0eef421dd6a5fd92481793daf" exitCode=0 Nov 24 12:18:44 crc kubenswrapper[5072]: I1124 12:18:44.394831 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerDied","Data":"d220dc7647c7de191bb9661af86034533cedb6d0eef421dd6a5fd92481793daf"} Nov 24 12:18:44 crc kubenswrapper[5072]: I1124 12:18:44.395120 5072 scope.go:117] "RemoveContainer" containerID="8f43d1f4633f1aa8538759d1c074486f2bc563268c7b723c6d137f75b353afbe" Nov 24 12:18:45 crc kubenswrapper[5072]: I1124 12:18:45.405418 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerStarted","Data":"5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e"} Nov 24 12:19:21 crc kubenswrapper[5072]: I1124 12:19:21.829500 5072 generic.go:334] "Generic (PLEG): container finished" podID="c4384a66-1728-45a3-9ab4-d1479c51cd18" containerID="9d2bfeefe2ed82ed926730fce95369e0e66957e04e2cb48ccddc0bb99c242ab6" exitCode=0 Nov 24 12:19:21 crc kubenswrapper[5072]: I1124 12:19:21.829637 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"c4384a66-1728-45a3-9ab4-d1479c51cd18","Type":"ContainerDied","Data":"9d2bfeefe2ed82ed926730fce95369e0e66957e04e2cb48ccddc0bb99c242ab6"} Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.272921 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.345781 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4384a66-1728-45a3-9ab4-d1479c51cd18-config-data\") pod \"c4384a66-1728-45a3-9ab4-d1479c51cd18\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.345855 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4384a66-1728-45a3-9ab4-d1479c51cd18-ssh-key\") pod \"c4384a66-1728-45a3-9ab4-d1479c51cd18\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.345972 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qbst\" (UniqueName: \"kubernetes.io/projected/c4384a66-1728-45a3-9ab4-d1479c51cd18-kube-api-access-8qbst\") pod \"c4384a66-1728-45a3-9ab4-d1479c51cd18\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.346807 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4384a66-1728-45a3-9ab4-d1479c51cd18-config-data" (OuterVolumeSpecName: "config-data") pod "c4384a66-1728-45a3-9ab4-d1479c51cd18" (UID: "c4384a66-1728-45a3-9ab4-d1479c51cd18"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.347875 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c4384a66-1728-45a3-9ab4-d1479c51cd18-ca-certs\") pod \"c4384a66-1728-45a3-9ab4-d1479c51cd18\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.348027 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c4384a66-1728-45a3-9ab4-d1479c51cd18-test-operator-ephemeral-workdir\") pod \"c4384a66-1728-45a3-9ab4-d1479c51cd18\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.348089 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c4384a66-1728-45a3-9ab4-d1479c51cd18-openstack-config\") pod \"c4384a66-1728-45a3-9ab4-d1479c51cd18\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.348180 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c4384a66-1728-45a3-9ab4-d1479c51cd18-test-operator-ephemeral-temporary\") pod \"c4384a66-1728-45a3-9ab4-d1479c51cd18\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.348235 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c4384a66-1728-45a3-9ab4-d1479c51cd18-openstack-config-secret\") pod \"c4384a66-1728-45a3-9ab4-d1479c51cd18\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.348290 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"c4384a66-1728-45a3-9ab4-d1479c51cd18\" (UID: \"c4384a66-1728-45a3-9ab4-d1479c51cd18\") " Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.348995 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4384a66-1728-45a3-9ab4-d1479c51cd18-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "c4384a66-1728-45a3-9ab4-d1479c51cd18" (UID: "c4384a66-1728-45a3-9ab4-d1479c51cd18"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.349348 5072 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c4384a66-1728-45a3-9ab4-d1479c51cd18-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.349362 5072 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4384a66-1728-45a3-9ab4-d1479c51cd18-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.352518 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4384a66-1728-45a3-9ab4-d1479c51cd18-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "c4384a66-1728-45a3-9ab4-d1479c51cd18" (UID: "c4384a66-1728-45a3-9ab4-d1479c51cd18"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.353532 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "test-operator-logs") pod "c4384a66-1728-45a3-9ab4-d1479c51cd18" (UID: "c4384a66-1728-45a3-9ab4-d1479c51cd18"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.356772 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4384a66-1728-45a3-9ab4-d1479c51cd18-kube-api-access-8qbst" (OuterVolumeSpecName: "kube-api-access-8qbst") pod "c4384a66-1728-45a3-9ab4-d1479c51cd18" (UID: "c4384a66-1728-45a3-9ab4-d1479c51cd18"). InnerVolumeSpecName "kube-api-access-8qbst". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.380221 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4384a66-1728-45a3-9ab4-d1479c51cd18-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c4384a66-1728-45a3-9ab4-d1479c51cd18" (UID: "c4384a66-1728-45a3-9ab4-d1479c51cd18"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.382551 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4384a66-1728-45a3-9ab4-d1479c51cd18-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "c4384a66-1728-45a3-9ab4-d1479c51cd18" (UID: "c4384a66-1728-45a3-9ab4-d1479c51cd18"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.395171 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4384a66-1728-45a3-9ab4-d1479c51cd18-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "c4384a66-1728-45a3-9ab4-d1479c51cd18" (UID: "c4384a66-1728-45a3-9ab4-d1479c51cd18"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.413494 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4384a66-1728-45a3-9ab4-d1479c51cd18-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "c4384a66-1728-45a3-9ab4-d1479c51cd18" (UID: "c4384a66-1728-45a3-9ab4-d1479c51cd18"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.451181 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qbst\" (UniqueName: \"kubernetes.io/projected/c4384a66-1728-45a3-9ab4-d1479c51cd18-kube-api-access-8qbst\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.451206 5072 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c4384a66-1728-45a3-9ab4-d1479c51cd18-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.451246 5072 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c4384a66-1728-45a3-9ab4-d1479c51cd18-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.451257 5072 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c4384a66-1728-45a3-9ab4-d1479c51cd18-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.451265 5072 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c4384a66-1728-45a3-9ab4-d1479c51cd18-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.451296 5072 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.451323 5072 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4384a66-1728-45a3-9ab4-d1479c51cd18-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.473431 5072 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.553187 5072 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.850691 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"c4384a66-1728-45a3-9ab4-d1479c51cd18","Type":"ContainerDied","Data":"eb5a2e2fe0a0d34f7f7e09338e4679b0f44bb4d5536b218d1ec58618dbb284b7"} Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.851005 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb5a2e2fe0a0d34f7f7e09338e4679b0f44bb4d5536b218d1ec58618dbb284b7" Nov 24 12:19:23 crc kubenswrapper[5072]: I1124 12:19:23.850847 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 12:19:30 crc kubenswrapper[5072]: I1124 12:19:30.625889 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 12:19:30 crc kubenswrapper[5072]: E1124 12:19:30.627169 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="813bb919-406f-42a0-8e17-73d219df6e86" containerName="extract-utilities" Nov 24 12:19:30 crc kubenswrapper[5072]: I1124 12:19:30.627196 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="813bb919-406f-42a0-8e17-73d219df6e86" containerName="extract-utilities" Nov 24 12:19:30 crc kubenswrapper[5072]: E1124 12:19:30.627229 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="813bb919-406f-42a0-8e17-73d219df6e86" containerName="registry-server" Nov 24 12:19:30 crc kubenswrapper[5072]: I1124 12:19:30.627245 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="813bb919-406f-42a0-8e17-73d219df6e86" containerName="registry-server" Nov 24 12:19:30 crc kubenswrapper[5072]: E1124 12:19:30.627297 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="813bb919-406f-42a0-8e17-73d219df6e86" containerName="extract-content" Nov 24 12:19:30 crc kubenswrapper[5072]: I1124 12:19:30.627312 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="813bb919-406f-42a0-8e17-73d219df6e86" containerName="extract-content" Nov 24 12:19:30 crc kubenswrapper[5072]: E1124 12:19:30.627337 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4384a66-1728-45a3-9ab4-d1479c51cd18" containerName="tempest-tests-tempest-tests-runner" Nov 24 12:19:30 crc kubenswrapper[5072]: I1124 12:19:30.627352 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4384a66-1728-45a3-9ab4-d1479c51cd18" containerName="tempest-tests-tempest-tests-runner" Nov 24 12:19:30 crc kubenswrapper[5072]: I1124 12:19:30.627772 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="813bb919-406f-42a0-8e17-73d219df6e86" containerName="registry-server" Nov 24 12:19:30 crc kubenswrapper[5072]: I1124 12:19:30.627835 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4384a66-1728-45a3-9ab4-d1479c51cd18" containerName="tempest-tests-tempest-tests-runner" Nov 24 12:19:30 crc kubenswrapper[5072]: I1124 12:19:30.628897 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 12:19:30 crc kubenswrapper[5072]: I1124 12:19:30.636577 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-kvkcl" Nov 24 12:19:30 crc kubenswrapper[5072]: I1124 12:19:30.665449 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 12:19:30 crc kubenswrapper[5072]: I1124 12:19:30.705786 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghw68\" (UniqueName: \"kubernetes.io/projected/5e7f7b49-4b5e-4050-bfdb-0cea02628c47-kube-api-access-ghw68\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5e7f7b49-4b5e-4050-bfdb-0cea02628c47\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 12:19:30 crc kubenswrapper[5072]: I1124 12:19:30.705881 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5e7f7b49-4b5e-4050-bfdb-0cea02628c47\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 12:19:30 crc kubenswrapper[5072]: I1124 12:19:30.808318 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghw68\" (UniqueName: \"kubernetes.io/projected/5e7f7b49-4b5e-4050-bfdb-0cea02628c47-kube-api-access-ghw68\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5e7f7b49-4b5e-4050-bfdb-0cea02628c47\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 12:19:30 crc kubenswrapper[5072]: I1124 12:19:30.808400 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5e7f7b49-4b5e-4050-bfdb-0cea02628c47\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 12:19:30 crc kubenswrapper[5072]: I1124 12:19:30.809731 5072 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5e7f7b49-4b5e-4050-bfdb-0cea02628c47\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 12:19:30 crc kubenswrapper[5072]: I1124 12:19:30.831869 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghw68\" (UniqueName: \"kubernetes.io/projected/5e7f7b49-4b5e-4050-bfdb-0cea02628c47-kube-api-access-ghw68\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5e7f7b49-4b5e-4050-bfdb-0cea02628c47\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 12:19:30 crc kubenswrapper[5072]: I1124 12:19:30.842640 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5e7f7b49-4b5e-4050-bfdb-0cea02628c47\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 12:19:30 crc 
kubenswrapper[5072]: I1124 12:19:30.960966 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 12:19:31 crc kubenswrapper[5072]: I1124 12:19:31.532297 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 12:19:31 crc kubenswrapper[5072]: I1124 12:19:31.944159 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"5e7f7b49-4b5e-4050-bfdb-0cea02628c47","Type":"ContainerStarted","Data":"975c239fb0778f562896104863a70db7eb723685482b54d23d687eafeb09dd48"} Nov 24 12:19:32 crc kubenswrapper[5072]: I1124 12:19:32.954318 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"5e7f7b49-4b5e-4050-bfdb-0cea02628c47","Type":"ContainerStarted","Data":"d1d3a4243a5349766a30fc66bf862d8cec4d614a7c81b32739a295bc9849bf8f"} Nov 24 12:19:32 crc kubenswrapper[5072]: I1124 12:19:32.979274 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.181623424 podStartE2EDuration="2.979246536s" podCreationTimestamp="2025-11-24 12:19:30 +0000 UTC" firstStartedPulling="2025-11-24 12:19:31.53372335 +0000 UTC m=+4223.245247856" lastFinishedPulling="2025-11-24 12:19:32.331346492 +0000 UTC m=+4224.042870968" observedRunningTime="2025-11-24 12:19:32.96613868 +0000 UTC m=+4224.677663196" watchObservedRunningTime="2025-11-24 12:19:32.979246536 +0000 UTC m=+4224.690771042" Nov 24 12:19:59 crc kubenswrapper[5072]: I1124 12:19:59.155744 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x5lkh/must-gather-h9lrd"] Nov 24 12:19:59 crc kubenswrapper[5072]: I1124 12:19:59.159059 5072 util.go:30] "No sandbox for pod can be found. 
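Note: the local-storage03-crc records show the two-phase flow for device-backed volumes: MountVolume.MountDevice resolves the device at a node-global path (/mnt/openstack/pv03), then MountVolume.SetUp makes it visible inside the pod's volumes directory; the earlier teardown at 12:19:23 ran the mirror image, TearDown for the pod followed by a single UnmountDevice on the node. For a filesystem local volume the SetUp step amounts to a bind mount; a Linux-only sketch (the per-pod target path below is assumed from the /var/lib/kubelet/pods/<UID>/volumes layout, and this is not kubelet's code):

//go:build linux

package main

import (
	"fmt"
	"syscall"
)

func main() {
	global := "/mnt/openstack/pv03" // device mount path from MountVolume.MountDevice
	// Assumed pod-scoped target directory (hypothetical path for illustration).
	podDir := "/var/lib/kubelet/pods/5e7f7b49-4b5e-4050-bfdb-0cea02628c47/volumes/kubernetes.io~local-volume/local-storage03-crc"

	// SetUp sketched as a bind mount of the global path into the pod directory.
	if err := syscall.Mount(global, podDir, "", syscall.MS_BIND, ""); err != nil {
		fmt.Println("bind mount failed:", err)
	}
}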
Need to start a new one" pod="openshift-must-gather-x5lkh/must-gather-h9lrd" Nov 24 12:19:59 crc kubenswrapper[5072]: I1124 12:19:59.160436 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-x5lkh"/"default-dockercfg-n25pv" Nov 24 12:19:59 crc kubenswrapper[5072]: I1124 12:19:59.161774 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-x5lkh"/"kube-root-ca.crt" Nov 24 12:19:59 crc kubenswrapper[5072]: I1124 12:19:59.162102 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-x5lkh"/"openshift-service-ca.crt" Nov 24 12:19:59 crc kubenswrapper[5072]: I1124 12:19:59.166970 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-x5lkh/must-gather-h9lrd"] Nov 24 12:19:59 crc kubenswrapper[5072]: I1124 12:19:59.278056 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/eff8ab72-c26f-4434-a4f1-1a19dbe034ba-must-gather-output\") pod \"must-gather-h9lrd\" (UID: \"eff8ab72-c26f-4434-a4f1-1a19dbe034ba\") " pod="openshift-must-gather-x5lkh/must-gather-h9lrd" Nov 24 12:19:59 crc kubenswrapper[5072]: I1124 12:19:59.278195 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwc8c\" (UniqueName: \"kubernetes.io/projected/eff8ab72-c26f-4434-a4f1-1a19dbe034ba-kube-api-access-nwc8c\") pod \"must-gather-h9lrd\" (UID: \"eff8ab72-c26f-4434-a4f1-1a19dbe034ba\") " pod="openshift-must-gather-x5lkh/must-gather-h9lrd" Nov 24 12:19:59 crc kubenswrapper[5072]: I1124 12:19:59.379961 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwc8c\" (UniqueName: \"kubernetes.io/projected/eff8ab72-c26f-4434-a4f1-1a19dbe034ba-kube-api-access-nwc8c\") pod \"must-gather-h9lrd\" (UID: \"eff8ab72-c26f-4434-a4f1-1a19dbe034ba\") " pod="openshift-must-gather-x5lkh/must-gather-h9lrd" Nov 24 12:19:59 crc kubenswrapper[5072]: I1124 12:19:59.380272 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/eff8ab72-c26f-4434-a4f1-1a19dbe034ba-must-gather-output\") pod \"must-gather-h9lrd\" (UID: \"eff8ab72-c26f-4434-a4f1-1a19dbe034ba\") " pod="openshift-must-gather-x5lkh/must-gather-h9lrd" Nov 24 12:19:59 crc kubenswrapper[5072]: I1124 12:19:59.380755 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/eff8ab72-c26f-4434-a4f1-1a19dbe034ba-must-gather-output\") pod \"must-gather-h9lrd\" (UID: \"eff8ab72-c26f-4434-a4f1-1a19dbe034ba\") " pod="openshift-must-gather-x5lkh/must-gather-h9lrd" Nov 24 12:19:59 crc kubenswrapper[5072]: I1124 12:19:59.983365 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwc8c\" (UniqueName: \"kubernetes.io/projected/eff8ab72-c26f-4434-a4f1-1a19dbe034ba-kube-api-access-nwc8c\") pod \"must-gather-h9lrd\" (UID: \"eff8ab72-c26f-4434-a4f1-1a19dbe034ba\") " pod="openshift-must-gather-x5lkh/must-gather-h9lrd" Nov 24 12:20:00 crc kubenswrapper[5072]: I1124 12:20:00.079851 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x5lkh/must-gather-h9lrd" Nov 24 12:20:00 crc kubenswrapper[5072]: I1124 12:20:00.539241 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-x5lkh/must-gather-h9lrd"] Nov 24 12:20:00 crc kubenswrapper[5072]: I1124 12:20:00.547920 5072 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:20:01 crc kubenswrapper[5072]: I1124 12:20:01.265598 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x5lkh/must-gather-h9lrd" event={"ID":"eff8ab72-c26f-4434-a4f1-1a19dbe034ba","Type":"ContainerStarted","Data":"ad58319eead796d9c613fa9935d29e5e16951848d94e75e37e5e7598de0246e3"} Nov 24 12:20:05 crc kubenswrapper[5072]: I1124 12:20:05.311016 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x5lkh/must-gather-h9lrd" event={"ID":"eff8ab72-c26f-4434-a4f1-1a19dbe034ba","Type":"ContainerStarted","Data":"e473bd2a93bcfb019dbea326946e4ecb12889b9d8000fea79757b4e5a7b4311a"} Nov 24 12:20:05 crc kubenswrapper[5072]: I1124 12:20:05.312473 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x5lkh/must-gather-h9lrd" event={"ID":"eff8ab72-c26f-4434-a4f1-1a19dbe034ba","Type":"ContainerStarted","Data":"a28b3c2b95aef109795b6fc4cbd99c5e2a681c6f2b02cef137cde68082450edc"} Nov 24 12:20:05 crc kubenswrapper[5072]: I1124 12:20:05.332926 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-x5lkh/must-gather-h9lrd" podStartSLOduration=2.6585063460000002 podStartE2EDuration="6.332908815s" podCreationTimestamp="2025-11-24 12:19:59 +0000 UTC" firstStartedPulling="2025-11-24 12:20:00.547281315 +0000 UTC m=+4252.258805811" lastFinishedPulling="2025-11-24 12:20:04.221683764 +0000 UTC m=+4255.933208280" observedRunningTime="2025-11-24 12:20:05.326914976 +0000 UTC m=+4257.038439492" watchObservedRunningTime="2025-11-24 12:20:05.332908815 +0000 UTC m=+4257.044433291" Nov 24 12:20:08 crc kubenswrapper[5072]: I1124 12:20:08.524059 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x5lkh/crc-debug-nl7dt"] Nov 24 12:20:08 crc kubenswrapper[5072]: I1124 12:20:08.525868 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x5lkh/crc-debug-nl7dt" Nov 24 12:20:08 crc kubenswrapper[5072]: I1124 12:20:08.599936 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5b01b9bd-82d9-40c4-9b27-301d17b22319-host\") pod \"crc-debug-nl7dt\" (UID: \"5b01b9bd-82d9-40c4-9b27-301d17b22319\") " pod="openshift-must-gather-x5lkh/crc-debug-nl7dt" Nov 24 12:20:08 crc kubenswrapper[5072]: I1124 12:20:08.600114 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnk6n\" (UniqueName: \"kubernetes.io/projected/5b01b9bd-82d9-40c4-9b27-301d17b22319-kube-api-access-lnk6n\") pod \"crc-debug-nl7dt\" (UID: \"5b01b9bd-82d9-40c4-9b27-301d17b22319\") " pod="openshift-must-gather-x5lkh/crc-debug-nl7dt" Nov 24 12:20:08 crc kubenswrapper[5072]: I1124 12:20:08.702439 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnk6n\" (UniqueName: \"kubernetes.io/projected/5b01b9bd-82d9-40c4-9b27-301d17b22319-kube-api-access-lnk6n\") pod \"crc-debug-nl7dt\" (UID: \"5b01b9bd-82d9-40c4-9b27-301d17b22319\") " pod="openshift-must-gather-x5lkh/crc-debug-nl7dt" Nov 24 12:20:08 crc kubenswrapper[5072]: I1124 12:20:08.702611 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5b01b9bd-82d9-40c4-9b27-301d17b22319-host\") pod \"crc-debug-nl7dt\" (UID: \"5b01b9bd-82d9-40c4-9b27-301d17b22319\") " pod="openshift-must-gather-x5lkh/crc-debug-nl7dt" Nov 24 12:20:08 crc kubenswrapper[5072]: I1124 12:20:08.702776 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5b01b9bd-82d9-40c4-9b27-301d17b22319-host\") pod \"crc-debug-nl7dt\" (UID: \"5b01b9bd-82d9-40c4-9b27-301d17b22319\") " pod="openshift-must-gather-x5lkh/crc-debug-nl7dt" Nov 24 12:20:08 crc kubenswrapper[5072]: I1124 12:20:08.726282 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnk6n\" (UniqueName: \"kubernetes.io/projected/5b01b9bd-82d9-40c4-9b27-301d17b22319-kube-api-access-lnk6n\") pod \"crc-debug-nl7dt\" (UID: \"5b01b9bd-82d9-40c4-9b27-301d17b22319\") " pod="openshift-must-gather-x5lkh/crc-debug-nl7dt" Nov 24 12:20:08 crc kubenswrapper[5072]: I1124 12:20:08.845094 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x5lkh/crc-debug-nl7dt" Nov 24 12:20:08 crc kubenswrapper[5072]: W1124 12:20:08.884145 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b01b9bd_82d9_40c4_9b27_301d17b22319.slice/crio-71e4d3b1c3fce7ac3b57ccfaf1d5151e6531329a4d5cf9d7cded8f87a5cbe7e1 WatchSource:0}: Error finding container 71e4d3b1c3fce7ac3b57ccfaf1d5151e6531329a4d5cf9d7cded8f87a5cbe7e1: Status 404 returned error can't find the container with id 71e4d3b1c3fce7ac3b57ccfaf1d5151e6531329a4d5cf9d7cded8f87a5cbe7e1 Nov 24 12:20:09 crc kubenswrapper[5072]: I1124 12:20:09.347735 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x5lkh/crc-debug-nl7dt" event={"ID":"5b01b9bd-82d9-40c4-9b27-301d17b22319","Type":"ContainerStarted","Data":"71e4d3b1c3fce7ac3b57ccfaf1d5151e6531329a4d5cf9d7cded8f87a5cbe7e1"} Nov 24 12:20:20 crc kubenswrapper[5072]: I1124 12:20:20.452405 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x5lkh/crc-debug-nl7dt" event={"ID":"5b01b9bd-82d9-40c4-9b27-301d17b22319","Type":"ContainerStarted","Data":"917e903071d55258fc1b06727b6c3c2590911a9c4dbd07b544f7432e04fc1e56"} Nov 24 12:20:20 crc kubenswrapper[5072]: I1124 12:20:20.472244 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-x5lkh/crc-debug-nl7dt" podStartSLOduration=2.26606635 podStartE2EDuration="12.47222002s" podCreationTimestamp="2025-11-24 12:20:08 +0000 UTC" firstStartedPulling="2025-11-24 12:20:08.886633969 +0000 UTC m=+4260.598158445" lastFinishedPulling="2025-11-24 12:20:19.092787639 +0000 UTC m=+4270.804312115" observedRunningTime="2025-11-24 12:20:20.463421431 +0000 UTC m=+4272.174945907" watchObservedRunningTime="2025-11-24 12:20:20.47222002 +0000 UTC m=+4272.183744496" Nov 24 12:21:08 crc kubenswrapper[5072]: I1124 12:21:08.907778 5072 generic.go:334] "Generic (PLEG): container finished" podID="5b01b9bd-82d9-40c4-9b27-301d17b22319" containerID="917e903071d55258fc1b06727b6c3c2590911a9c4dbd07b544f7432e04fc1e56" exitCode=0 Nov 24 12:21:08 crc kubenswrapper[5072]: I1124 12:21:08.907858 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x5lkh/crc-debug-nl7dt" event={"ID":"5b01b9bd-82d9-40c4-9b27-301d17b22319","Type":"ContainerDied","Data":"917e903071d55258fc1b06727b6c3c2590911a9c4dbd07b544f7432e04fc1e56"} Nov 24 12:21:10 crc kubenswrapper[5072]: I1124 12:21:10.054302 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x5lkh/crc-debug-nl7dt" Nov 24 12:21:10 crc kubenswrapper[5072]: I1124 12:21:10.105667 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x5lkh/crc-debug-nl7dt"] Nov 24 12:21:10 crc kubenswrapper[5072]: I1124 12:21:10.116791 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x5lkh/crc-debug-nl7dt"] Nov 24 12:21:10 crc kubenswrapper[5072]: I1124 12:21:10.148237 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnk6n\" (UniqueName: \"kubernetes.io/projected/5b01b9bd-82d9-40c4-9b27-301d17b22319-kube-api-access-lnk6n\") pod \"5b01b9bd-82d9-40c4-9b27-301d17b22319\" (UID: \"5b01b9bd-82d9-40c4-9b27-301d17b22319\") " Nov 24 12:21:10 crc kubenswrapper[5072]: I1124 12:21:10.148293 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5b01b9bd-82d9-40c4-9b27-301d17b22319-host\") pod \"5b01b9bd-82d9-40c4-9b27-301d17b22319\" (UID: \"5b01b9bd-82d9-40c4-9b27-301d17b22319\") " Nov 24 12:21:10 crc kubenswrapper[5072]: I1124 12:21:10.151122 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b01b9bd-82d9-40c4-9b27-301d17b22319-host" (OuterVolumeSpecName: "host") pod "5b01b9bd-82d9-40c4-9b27-301d17b22319" (UID: "5b01b9bd-82d9-40c4-9b27-301d17b22319"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:21:10 crc kubenswrapper[5072]: I1124 12:21:10.155996 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b01b9bd-82d9-40c4-9b27-301d17b22319-kube-api-access-lnk6n" (OuterVolumeSpecName: "kube-api-access-lnk6n") pod "5b01b9bd-82d9-40c4-9b27-301d17b22319" (UID: "5b01b9bd-82d9-40c4-9b27-301d17b22319"). InnerVolumeSpecName "kube-api-access-lnk6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:21:10 crc kubenswrapper[5072]: I1124 12:21:10.251818 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnk6n\" (UniqueName: \"kubernetes.io/projected/5b01b9bd-82d9-40c4-9b27-301d17b22319-kube-api-access-lnk6n\") on node \"crc\" DevicePath \"\"" Nov 24 12:21:10 crc kubenswrapper[5072]: I1124 12:21:10.251856 5072 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5b01b9bd-82d9-40c4-9b27-301d17b22319-host\") on node \"crc\" DevicePath \"\"" Nov 24 12:21:10 crc kubenswrapper[5072]: I1124 12:21:10.934212 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71e4d3b1c3fce7ac3b57ccfaf1d5151e6531329a4d5cf9d7cded8f87a5cbe7e1" Nov 24 12:21:10 crc kubenswrapper[5072]: I1124 12:21:10.934306 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x5lkh/crc-debug-nl7dt" Nov 24 12:21:11 crc kubenswrapper[5072]: I1124 12:21:11.029043 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b01b9bd-82d9-40c4-9b27-301d17b22319" path="/var/lib/kubelet/pods/5b01b9bd-82d9-40c4-9b27-301d17b22319/volumes" Nov 24 12:21:11 crc kubenswrapper[5072]: I1124 12:21:11.293290 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x5lkh/crc-debug-dzrxl"] Nov 24 12:21:11 crc kubenswrapper[5072]: E1124 12:21:11.295194 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b01b9bd-82d9-40c4-9b27-301d17b22319" containerName="container-00" Nov 24 12:21:11 crc kubenswrapper[5072]: I1124 12:21:11.295301 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b01b9bd-82d9-40c4-9b27-301d17b22319" containerName="container-00" Nov 24 12:21:11 crc kubenswrapper[5072]: I1124 12:21:11.295748 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b01b9bd-82d9-40c4-9b27-301d17b22319" containerName="container-00" Nov 24 12:21:11 crc kubenswrapper[5072]: I1124 12:21:11.299080 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x5lkh/crc-debug-dzrxl" Nov 24 12:21:11 crc kubenswrapper[5072]: I1124 12:21:11.372755 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f4678180-4a8b-49d2-907f-60bd7e080192-host\") pod \"crc-debug-dzrxl\" (UID: \"f4678180-4a8b-49d2-907f-60bd7e080192\") " pod="openshift-must-gather-x5lkh/crc-debug-dzrxl" Nov 24 12:21:11 crc kubenswrapper[5072]: I1124 12:21:11.372853 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rshhk\" (UniqueName: \"kubernetes.io/projected/f4678180-4a8b-49d2-907f-60bd7e080192-kube-api-access-rshhk\") pod \"crc-debug-dzrxl\" (UID: \"f4678180-4a8b-49d2-907f-60bd7e080192\") " pod="openshift-must-gather-x5lkh/crc-debug-dzrxl" Nov 24 12:21:11 crc kubenswrapper[5072]: I1124 12:21:11.475242 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f4678180-4a8b-49d2-907f-60bd7e080192-host\") pod \"crc-debug-dzrxl\" (UID: \"f4678180-4a8b-49d2-907f-60bd7e080192\") " pod="openshift-must-gather-x5lkh/crc-debug-dzrxl" Nov 24 12:21:11 crc kubenswrapper[5072]: I1124 12:21:11.475339 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rshhk\" (UniqueName: \"kubernetes.io/projected/f4678180-4a8b-49d2-907f-60bd7e080192-kube-api-access-rshhk\") pod \"crc-debug-dzrxl\" (UID: \"f4678180-4a8b-49d2-907f-60bd7e080192\") " pod="openshift-must-gather-x5lkh/crc-debug-dzrxl" Nov 24 12:21:11 crc kubenswrapper[5072]: I1124 12:21:11.475365 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f4678180-4a8b-49d2-907f-60bd7e080192-host\") pod \"crc-debug-dzrxl\" (UID: \"f4678180-4a8b-49d2-907f-60bd7e080192\") " pod="openshift-must-gather-x5lkh/crc-debug-dzrxl" Nov 24 12:21:11 crc kubenswrapper[5072]: I1124 12:21:11.507160 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rshhk\" (UniqueName: \"kubernetes.io/projected/f4678180-4a8b-49d2-907f-60bd7e080192-kube-api-access-rshhk\") pod \"crc-debug-dzrxl\" (UID: \"f4678180-4a8b-49d2-907f-60bd7e080192\") " 
pod="openshift-must-gather-x5lkh/crc-debug-dzrxl" Nov 24 12:21:11 crc kubenswrapper[5072]: I1124 12:21:11.615071 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x5lkh/crc-debug-dzrxl" Nov 24 12:21:11 crc kubenswrapper[5072]: I1124 12:21:11.945148 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x5lkh/crc-debug-dzrxl" event={"ID":"f4678180-4a8b-49d2-907f-60bd7e080192","Type":"ContainerStarted","Data":"0cbd1c1b788a7c3e2a6dfa5a271f70b2394e61338eb2050510867dc2b5af2da6"} Nov 24 12:21:12 crc kubenswrapper[5072]: I1124 12:21:12.955748 5072 generic.go:334] "Generic (PLEG): container finished" podID="f4678180-4a8b-49d2-907f-60bd7e080192" containerID="f44be46ba01caddd22551e1313d5b7f1e41c8b007092cf4a7a53df854bd93017" exitCode=0 Nov 24 12:21:12 crc kubenswrapper[5072]: I1124 12:21:12.955865 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x5lkh/crc-debug-dzrxl" event={"ID":"f4678180-4a8b-49d2-907f-60bd7e080192","Type":"ContainerDied","Data":"f44be46ba01caddd22551e1313d5b7f1e41c8b007092cf4a7a53df854bd93017"} Nov 24 12:21:13 crc kubenswrapper[5072]: I1124 12:21:13.644575 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:21:13 crc kubenswrapper[5072]: I1124 12:21:13.644630 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:21:14 crc kubenswrapper[5072]: I1124 12:21:14.069899 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x5lkh/crc-debug-dzrxl" Nov 24 12:21:14 crc kubenswrapper[5072]: I1124 12:21:14.122351 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rshhk\" (UniqueName: \"kubernetes.io/projected/f4678180-4a8b-49d2-907f-60bd7e080192-kube-api-access-rshhk\") pod \"f4678180-4a8b-49d2-907f-60bd7e080192\" (UID: \"f4678180-4a8b-49d2-907f-60bd7e080192\") " Nov 24 12:21:14 crc kubenswrapper[5072]: I1124 12:21:14.122481 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f4678180-4a8b-49d2-907f-60bd7e080192-host\") pod \"f4678180-4a8b-49d2-907f-60bd7e080192\" (UID: \"f4678180-4a8b-49d2-907f-60bd7e080192\") " Nov 24 12:21:14 crc kubenswrapper[5072]: I1124 12:21:14.122865 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4678180-4a8b-49d2-907f-60bd7e080192-host" (OuterVolumeSpecName: "host") pod "f4678180-4a8b-49d2-907f-60bd7e080192" (UID: "f4678180-4a8b-49d2-907f-60bd7e080192"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:21:14 crc kubenswrapper[5072]: I1124 12:21:14.129148 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4678180-4a8b-49d2-907f-60bd7e080192-kube-api-access-rshhk" (OuterVolumeSpecName: "kube-api-access-rshhk") pod "f4678180-4a8b-49d2-907f-60bd7e080192" (UID: "f4678180-4a8b-49d2-907f-60bd7e080192"). 
InnerVolumeSpecName "kube-api-access-rshhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:21:14 crc kubenswrapper[5072]: I1124 12:21:14.223871 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rshhk\" (UniqueName: \"kubernetes.io/projected/f4678180-4a8b-49d2-907f-60bd7e080192-kube-api-access-rshhk\") on node \"crc\" DevicePath \"\"" Nov 24 12:21:14 crc kubenswrapper[5072]: I1124 12:21:14.223908 5072 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f4678180-4a8b-49d2-907f-60bd7e080192-host\") on node \"crc\" DevicePath \"\"" Nov 24 12:21:14 crc kubenswrapper[5072]: I1124 12:21:14.933875 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x5lkh/crc-debug-dzrxl"] Nov 24 12:21:14 crc kubenswrapper[5072]: I1124 12:21:14.942516 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x5lkh/crc-debug-dzrxl"] Nov 24 12:21:14 crc kubenswrapper[5072]: I1124 12:21:14.972086 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cbd1c1b788a7c3e2a6dfa5a271f70b2394e61338eb2050510867dc2b5af2da6" Nov 24 12:21:14 crc kubenswrapper[5072]: I1124 12:21:14.972163 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x5lkh/crc-debug-dzrxl" Nov 24 12:21:15 crc kubenswrapper[5072]: I1124 12:21:15.029497 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4678180-4a8b-49d2-907f-60bd7e080192" path="/var/lib/kubelet/pods/f4678180-4a8b-49d2-907f-60bd7e080192/volumes" Nov 24 12:21:16 crc kubenswrapper[5072]: I1124 12:21:16.307205 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x5lkh/crc-debug-6mtm5"] Nov 24 12:21:16 crc kubenswrapper[5072]: E1124 12:21:16.307994 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4678180-4a8b-49d2-907f-60bd7e080192" containerName="container-00" Nov 24 12:21:16 crc kubenswrapper[5072]: I1124 12:21:16.308009 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4678180-4a8b-49d2-907f-60bd7e080192" containerName="container-00" Nov 24 12:21:16 crc kubenswrapper[5072]: I1124 12:21:16.308242 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4678180-4a8b-49d2-907f-60bd7e080192" containerName="container-00" Nov 24 12:21:16 crc kubenswrapper[5072]: I1124 12:21:16.308973 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x5lkh/crc-debug-6mtm5" Nov 24 12:21:16 crc kubenswrapper[5072]: I1124 12:21:16.362933 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d5g2\" (UniqueName: \"kubernetes.io/projected/0d51a210-96fd-493c-bb5f-e5dcb287a43c-kube-api-access-7d5g2\") pod \"crc-debug-6mtm5\" (UID: \"0d51a210-96fd-493c-bb5f-e5dcb287a43c\") " pod="openshift-must-gather-x5lkh/crc-debug-6mtm5" Nov 24 12:21:16 crc kubenswrapper[5072]: I1124 12:21:16.363126 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0d51a210-96fd-493c-bb5f-e5dcb287a43c-host\") pod \"crc-debug-6mtm5\" (UID: \"0d51a210-96fd-493c-bb5f-e5dcb287a43c\") " pod="openshift-must-gather-x5lkh/crc-debug-6mtm5" Nov 24 12:21:16 crc kubenswrapper[5072]: I1124 12:21:16.465561 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d5g2\" (UniqueName: \"kubernetes.io/projected/0d51a210-96fd-493c-bb5f-e5dcb287a43c-kube-api-access-7d5g2\") pod \"crc-debug-6mtm5\" (UID: \"0d51a210-96fd-493c-bb5f-e5dcb287a43c\") " pod="openshift-must-gather-x5lkh/crc-debug-6mtm5" Nov 24 12:21:16 crc kubenswrapper[5072]: I1124 12:21:16.465892 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0d51a210-96fd-493c-bb5f-e5dcb287a43c-host\") pod \"crc-debug-6mtm5\" (UID: \"0d51a210-96fd-493c-bb5f-e5dcb287a43c\") " pod="openshift-must-gather-x5lkh/crc-debug-6mtm5" Nov 24 12:21:16 crc kubenswrapper[5072]: I1124 12:21:16.466021 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0d51a210-96fd-493c-bb5f-e5dcb287a43c-host\") pod \"crc-debug-6mtm5\" (UID: \"0d51a210-96fd-493c-bb5f-e5dcb287a43c\") " pod="openshift-must-gather-x5lkh/crc-debug-6mtm5" Nov 24 12:21:16 crc kubenswrapper[5072]: I1124 12:21:16.487003 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d5g2\" (UniqueName: \"kubernetes.io/projected/0d51a210-96fd-493c-bb5f-e5dcb287a43c-kube-api-access-7d5g2\") pod \"crc-debug-6mtm5\" (UID: \"0d51a210-96fd-493c-bb5f-e5dcb287a43c\") " pod="openshift-must-gather-x5lkh/crc-debug-6mtm5" Nov 24 12:21:16 crc kubenswrapper[5072]: I1124 12:21:16.633428 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x5lkh/crc-debug-6mtm5" Nov 24 12:21:16 crc kubenswrapper[5072]: I1124 12:21:16.989850 5072 generic.go:334] "Generic (PLEG): container finished" podID="0d51a210-96fd-493c-bb5f-e5dcb287a43c" containerID="02a008d25817296a27ecc6f0f1a51e7e5a1f462105a66172891f8696217941ca" exitCode=0 Nov 24 12:21:16 crc kubenswrapper[5072]: I1124 12:21:16.989898 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x5lkh/crc-debug-6mtm5" event={"ID":"0d51a210-96fd-493c-bb5f-e5dcb287a43c","Type":"ContainerDied","Data":"02a008d25817296a27ecc6f0f1a51e7e5a1f462105a66172891f8696217941ca"} Nov 24 12:21:16 crc kubenswrapper[5072]: I1124 12:21:16.989938 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x5lkh/crc-debug-6mtm5" event={"ID":"0d51a210-96fd-493c-bb5f-e5dcb287a43c","Type":"ContainerStarted","Data":"2219fde1efc750c7632dd40972527f8730ad3429b7b6ad698588d923f47db991"} Nov 24 12:21:17 crc kubenswrapper[5072]: I1124 12:21:17.039139 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x5lkh/crc-debug-6mtm5"] Nov 24 12:21:17 crc kubenswrapper[5072]: I1124 12:21:17.050430 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x5lkh/crc-debug-6mtm5"] Nov 24 12:21:18 crc kubenswrapper[5072]: I1124 12:21:18.130086 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x5lkh/crc-debug-6mtm5" Nov 24 12:21:18 crc kubenswrapper[5072]: I1124 12:21:18.304679 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d5g2\" (UniqueName: \"kubernetes.io/projected/0d51a210-96fd-493c-bb5f-e5dcb287a43c-kube-api-access-7d5g2\") pod \"0d51a210-96fd-493c-bb5f-e5dcb287a43c\" (UID: \"0d51a210-96fd-493c-bb5f-e5dcb287a43c\") " Nov 24 12:21:18 crc kubenswrapper[5072]: I1124 12:21:18.304759 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0d51a210-96fd-493c-bb5f-e5dcb287a43c-host\") pod \"0d51a210-96fd-493c-bb5f-e5dcb287a43c\" (UID: \"0d51a210-96fd-493c-bb5f-e5dcb287a43c\") " Nov 24 12:21:18 crc kubenswrapper[5072]: I1124 12:21:18.304988 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d51a210-96fd-493c-bb5f-e5dcb287a43c-host" (OuterVolumeSpecName: "host") pod "0d51a210-96fd-493c-bb5f-e5dcb287a43c" (UID: "0d51a210-96fd-493c-bb5f-e5dcb287a43c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:21:18 crc kubenswrapper[5072]: I1124 12:21:18.305740 5072 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0d51a210-96fd-493c-bb5f-e5dcb287a43c-host\") on node \"crc\" DevicePath \"\"" Nov 24 12:21:18 crc kubenswrapper[5072]: I1124 12:21:18.312773 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d51a210-96fd-493c-bb5f-e5dcb287a43c-kube-api-access-7d5g2" (OuterVolumeSpecName: "kube-api-access-7d5g2") pod "0d51a210-96fd-493c-bb5f-e5dcb287a43c" (UID: "0d51a210-96fd-493c-bb5f-e5dcb287a43c"). InnerVolumeSpecName "kube-api-access-7d5g2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:21:18 crc kubenswrapper[5072]: I1124 12:21:18.408730 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d5g2\" (UniqueName: \"kubernetes.io/projected/0d51a210-96fd-493c-bb5f-e5dcb287a43c-kube-api-access-7d5g2\") on node \"crc\" DevicePath \"\"" Nov 24 12:21:19 crc kubenswrapper[5072]: I1124 12:21:19.015917 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x5lkh/crc-debug-6mtm5" Nov 24 12:21:19 crc kubenswrapper[5072]: I1124 12:21:19.035638 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d51a210-96fd-493c-bb5f-e5dcb287a43c" path="/var/lib/kubelet/pods/0d51a210-96fd-493c-bb5f-e5dcb287a43c/volumes" Nov 24 12:21:19 crc kubenswrapper[5072]: I1124 12:21:19.036971 5072 scope.go:117] "RemoveContainer" containerID="02a008d25817296a27ecc6f0f1a51e7e5a1f462105a66172891f8696217941ca" Nov 24 12:21:42 crc kubenswrapper[5072]: I1124 12:21:42.047499 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7785cf9ff8-jrntg_02bf4aaa-02e9-42b0-96e7-182557310711/barbican-api/0.log" Nov 24 12:21:42 crc kubenswrapper[5072]: I1124 12:21:42.221161 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7785cf9ff8-jrntg_02bf4aaa-02e9-42b0-96e7-182557310711/barbican-api-log/0.log" Nov 24 12:21:42 crc kubenswrapper[5072]: I1124 12:21:42.327281 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-56f6884b8b-d9lh4_17dcf560-c08b-4adb-b4e1-90887cddba39/barbican-keystone-listener/0.log" Nov 24 12:21:42 crc kubenswrapper[5072]: I1124 12:21:42.505471 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-55f6867c5c-rjpdx_522a3a4f-dbc9-4b6a-9bff-5df22b4cba44/barbican-worker/0.log" Nov 24 12:21:42 crc kubenswrapper[5072]: I1124 12:21:42.561725 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-55f6867c5c-rjpdx_522a3a4f-dbc9-4b6a-9bff-5df22b4cba44/barbican-worker-log/0.log" Nov 24 12:21:42 crc kubenswrapper[5072]: I1124 12:21:42.576405 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-56f6884b8b-d9lh4_17dcf560-c08b-4adb-b4e1-90887cddba39/barbican-keystone-listener-log/0.log" Nov 24 12:21:42 crc kubenswrapper[5072]: I1124 12:21:42.731048 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4_ddef4dcc-c1f4-4057-8503-14afc5bffd37/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:21:42 crc kubenswrapper[5072]: I1124 12:21:42.778734 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21/ceilometer-central-agent/0.log" Nov 24 12:21:42 crc kubenswrapper[5072]: I1124 12:21:42.939211 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21/ceilometer-notification-agent/0.log" Nov 24 12:21:42 crc kubenswrapper[5072]: I1124 12:21:42.976968 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21/sg-core/0.log" Nov 24 12:21:42 crc kubenswrapper[5072]: I1124 12:21:42.977790 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21/proxy-httpd/0.log" Nov 24 12:21:43 crc 
kubenswrapper[5072]: I1124 12:21:43.319314 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-nr928_95c83f58-e5a9-4038-ae80-2ba999d47b81/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:21:43 crc kubenswrapper[5072]: I1124 12:21:43.321781 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr_42275dab-0c0f-488a-9d9f-00d08fd1a9fb/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:21:43 crc kubenswrapper[5072]: I1124 12:21:43.465065 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_83c629ab-d9bd-4c85-b3e8-7d43a3d1c495/cinder-api/0.log" Nov 24 12:21:43 crc kubenswrapper[5072]: I1124 12:21:43.492036 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_83c629ab-d9bd-4c85-b3e8-7d43a3d1c495/cinder-api-log/0.log" Nov 24 12:21:43 crc kubenswrapper[5072]: I1124 12:21:43.645169 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:21:43 crc kubenswrapper[5072]: I1124 12:21:43.647225 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:21:43 crc kubenswrapper[5072]: I1124 12:21:43.718407 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_e51194ec-7c1f-4609-996f-ee210bb13bb5/probe/0.log" Nov 24 12:21:43 crc kubenswrapper[5072]: I1124 12:21:43.724327 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_e51194ec-7c1f-4609-996f-ee210bb13bb5/cinder-backup/0.log" Nov 24 12:21:43 crc kubenswrapper[5072]: I1124 12:21:43.799756 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5053f25d-e6d3-4a92-88f4-5659485403af/cinder-scheduler/0.log" Nov 24 12:21:43 crc kubenswrapper[5072]: I1124 12:21:43.923085 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5053f25d-e6d3-4a92-88f4-5659485403af/probe/0.log" Nov 24 12:21:43 crc kubenswrapper[5072]: I1124 12:21:43.982196 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0/cinder-volume/0.log" Nov 24 12:21:44 crc kubenswrapper[5072]: I1124 12:21:44.011604 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0/probe/0.log" Nov 24 12:21:44 crc kubenswrapper[5072]: I1124 12:21:44.154903 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt_3960ebf7-e874-4d40-9d12-759d8bf2b312/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:21:44 crc kubenswrapper[5072]: I1124 12:21:44.235579 5072 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-vptlp_792ebb76-1e10-452d-a1e3-159bb5b80975/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:21:44 crc kubenswrapper[5072]: I1124 12:21:44.369316 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-g6frb_0307a1dc-4248-472b-9b5e-51f2f116ac64/init/0.log" Nov 24 12:21:44 crc kubenswrapper[5072]: I1124 12:21:44.587772 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-g6frb_0307a1dc-4248-472b-9b5e-51f2f116ac64/init/0.log" Nov 24 12:21:44 crc kubenswrapper[5072]: I1124 12:21:44.595264 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-g6frb_0307a1dc-4248-472b-9b5e-51f2f116ac64/dnsmasq-dns/0.log" Nov 24 12:21:44 crc kubenswrapper[5072]: I1124 12:21:44.664791 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1d71c9a2-3657-43f6-aec2-b53e3ea8fc01/glance-httpd/0.log" Nov 24 12:21:44 crc kubenswrapper[5072]: I1124 12:21:44.816555 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_61880241-c7c3-4422-adbb-3e6323831d71/glance-httpd/0.log" Nov 24 12:21:44 crc kubenswrapper[5072]: I1124 12:21:44.825927 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1d71c9a2-3657-43f6-aec2-b53e3ea8fc01/glance-log/0.log" Nov 24 12:21:44 crc kubenswrapper[5072]: I1124 12:21:44.873098 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_61880241-c7c3-4422-adbb-3e6323831d71/glance-log/0.log" Nov 24 12:21:45 crc kubenswrapper[5072]: I1124 12:21:45.153146 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-575b5d47b6-n66fd_78739666-79c8-4af9-9766-6793e7975629/horizon/1.log" Nov 24 12:21:45 crc kubenswrapper[5072]: I1124 12:21:45.156388 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-575b5d47b6-n66fd_78739666-79c8-4af9-9766-6793e7975629/horizon/0.log" Nov 24 12:21:45 crc kubenswrapper[5072]: I1124 12:21:45.284866 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-575b5d47b6-n66fd_78739666-79c8-4af9-9766-6793e7975629/horizon-log/0.log" Nov 24 12:21:45 crc kubenswrapper[5072]: I1124 12:21:45.363324 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7_55863054-3da4-4d20-80f7-9dd43d6ce388/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:21:46 crc kubenswrapper[5072]: I1124 12:21:46.082782 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-lrxgj_b7687777-0417-42e1-8f0e-201de683f32d/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:21:46 crc kubenswrapper[5072]: I1124 12:21:46.295933 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29399761-642mr_360e5e7f-fc1f-4d24-8446-b97c9c04aa46/keystone-cron/0.log" Nov 24 12:21:46 crc kubenswrapper[5072]: I1124 12:21:46.330651 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_4d9aa589-2a3a-4e9a-a1d6-92fc939cf2f6/kube-state-metrics/0.log" Nov 24 12:21:46 crc kubenswrapper[5072]: I1124 12:21:46.750677 5072 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq_619cab13-44ee-48c6-bf40-4baddd9ad88e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:21:46 crc kubenswrapper[5072]: I1124 12:21:46.907746 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6cc7b79dbf-mkd8x_f71f36ff-e9cc-4207-8381-a4edf917c2b1/keystone-api/0.log" Nov 24 12:21:47 crc kubenswrapper[5072]: I1124 12:21:47.076323 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_7c1f9647-62ad-452d-84ae-81211ebc18b5/probe/0.log" Nov 24 12:21:47 crc kubenswrapper[5072]: I1124 12:21:47.156629 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_f4e064b6-df4e-436b-9dec-c72ff87569f2/manila-api/0.log" Nov 24 12:21:47 crc kubenswrapper[5072]: I1124 12:21:47.193035 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_7c1f9647-62ad-452d-84ae-81211ebc18b5/manila-scheduler/0.log" Nov 24 12:21:47 crc kubenswrapper[5072]: I1124 12:21:47.537690 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_aee02894-118d-46a9-88b6-4e2099bdf16f/manila-share/0.log" Nov 24 12:21:47 crc kubenswrapper[5072]: I1124 12:21:47.654042 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_f4e064b6-df4e-436b-9dec-c72ff87569f2/manila-api-log/0.log" Nov 24 12:21:47 crc kubenswrapper[5072]: I1124 12:21:47.738430 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_aee02894-118d-46a9-88b6-4e2099bdf16f/probe/0.log" Nov 24 12:21:47 crc kubenswrapper[5072]: I1124 12:21:47.965725 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6dc7d7697-tf7nw_c1ae9399-6f4c-4053-84c8-821eb2867dc8/neutron-api/0.log" Nov 24 12:21:47 crc kubenswrapper[5072]: I1124 12:21:47.983823 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6dc7d7697-tf7nw_c1ae9399-6f4c-4053-84c8-821eb2867dc8/neutron-httpd/0.log" Nov 24 12:21:48 crc kubenswrapper[5072]: I1124 12:21:48.111518 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95_45051007-ac2c-49b5-acda-c9fdccd8cf9d/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:21:48 crc kubenswrapper[5072]: I1124 12:21:48.408755 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_82f52ff9-d0f6-4a88-bc4e-47d4d47808ac/nova-api-log/0.log" Nov 24 12:21:48 crc kubenswrapper[5072]: I1124 12:21:48.547956 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_cf68ac0f-299c-4ed5-a198-30bd0b2a7544/nova-cell0-conductor-conductor/0.log" Nov 24 12:21:48 crc kubenswrapper[5072]: I1124 12:21:48.727365 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_42a95d10-e572-4170-aa79-9b98d2c290b7/nova-cell1-conductor-conductor/0.log" Nov 24 12:21:48 crc kubenswrapper[5072]: I1124 12:21:48.762419 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_82f52ff9-d0f6-4a88-bc4e-47d4d47808ac/nova-api-api/0.log" Nov 24 12:21:48 crc kubenswrapper[5072]: I1124 12:21:48.860695 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_8a061135-fd7e-4c6c-bbca-422e684c0ccb/nova-cell1-novncproxy-novncproxy/0.log" Nov 24 12:21:49 crc kubenswrapper[5072]: I1124 
12:21:49.009555 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7_a25d738b-a5be-44f2-86f2-9b554c3f7947/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:21:49 crc kubenswrapper[5072]: I1124 12:21:49.127947 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_cb7d5b02-88e5-4f50-8039-3d573e832977/nova-metadata-log/0.log" Nov 24 12:21:49 crc kubenswrapper[5072]: I1124 12:21:49.409528 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_c842f0bb-64ee-4e70-a276-cf281480cf05/nova-scheduler-scheduler/0.log" Nov 24 12:21:49 crc kubenswrapper[5072]: I1124 12:21:49.502832 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e05f8763-9e64-4bf6-84c8-25df03057309/mysql-bootstrap/0.log" Nov 24 12:21:49 crc kubenswrapper[5072]: I1124 12:21:49.738348 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e05f8763-9e64-4bf6-84c8-25df03057309/mysql-bootstrap/0.log" Nov 24 12:21:49 crc kubenswrapper[5072]: I1124 12:21:49.768827 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e05f8763-9e64-4bf6-84c8-25df03057309/galera/0.log" Nov 24 12:21:49 crc kubenswrapper[5072]: I1124 12:21:49.956912 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0f143b81-90ef-461e-a3b5-36ceb68eda94/mysql-bootstrap/0.log" Nov 24 12:21:50 crc kubenswrapper[5072]: I1124 12:21:50.109316 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0f143b81-90ef-461e-a3b5-36ceb68eda94/mysql-bootstrap/0.log" Nov 24 12:21:50 crc kubenswrapper[5072]: I1124 12:21:50.189484 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0f143b81-90ef-461e-a3b5-36ceb68eda94/galera/0.log" Nov 24 12:21:50 crc kubenswrapper[5072]: I1124 12:21:50.357602 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_36162589-ddbd-4386-82e5-62d4d73d41b7/openstackclient/0.log" Nov 24 12:21:50 crc kubenswrapper[5072]: I1124 12:21:50.429911 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ltkhm_d1f48ba7-b537-4282-9eef-aee78410afcb/ovn-controller/0.log" Nov 24 12:21:50 crc kubenswrapper[5072]: I1124 12:21:50.565208 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-dwffh_6dc3beca-8832-4852-a397-cca5accca1a1/openstack-network-exporter/0.log" Nov 24 12:21:50 crc kubenswrapper[5072]: I1124 12:21:50.787329 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7tcxz_a15ce4b3-7344-4b9f-983a-0065209e9d68/ovsdb-server-init/0.log" Nov 24 12:21:50 crc kubenswrapper[5072]: I1124 12:21:50.819130 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_cb7d5b02-88e5-4f50-8039-3d573e832977/nova-metadata-metadata/0.log" Nov 24 12:21:50 crc kubenswrapper[5072]: I1124 12:21:50.986396 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7tcxz_a15ce4b3-7344-4b9f-983a-0065209e9d68/ovsdb-server-init/0.log" Nov 24 12:21:51 crc kubenswrapper[5072]: I1124 12:21:51.018902 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7tcxz_a15ce4b3-7344-4b9f-983a-0065209e9d68/ovs-vswitchd/0.log" Nov 24 12:21:51 crc 
kubenswrapper[5072]: I1124 12:21:51.056244 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7tcxz_a15ce4b3-7344-4b9f-983a-0065209e9d68/ovsdb-server/0.log" Nov 24 12:21:51 crc kubenswrapper[5072]: I1124 12:21:51.213786 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-qk9gt_60fbd22d-6dd6-4bdf-aa92-3b4682feeee0/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:21:51 crc kubenswrapper[5072]: I1124 12:21:51.288723 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_67176bb7-8d1f-453f-b403-7e2f323f41f8/openstack-network-exporter/0.log" Nov 24 12:21:51 crc kubenswrapper[5072]: I1124 12:21:51.422107 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_67176bb7-8d1f-453f-b403-7e2f323f41f8/ovn-northd/0.log" Nov 24 12:21:51 crc kubenswrapper[5072]: I1124 12:21:51.445229 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e8ca3957-ce1c-49e8-a56b-d0f406d2e078/openstack-network-exporter/0.log" Nov 24 12:21:51 crc kubenswrapper[5072]: I1124 12:21:51.526510 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e8ca3957-ce1c-49e8-a56b-d0f406d2e078/ovsdbserver-nb/0.log" Nov 24 12:21:51 crc kubenswrapper[5072]: I1124 12:21:51.666613 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c95fc4be-5531-4d4d-98a5-aeb6d64b732d/openstack-network-exporter/0.log" Nov 24 12:21:51 crc kubenswrapper[5072]: I1124 12:21:51.719021 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c95fc4be-5531-4d4d-98a5-aeb6d64b732d/ovsdbserver-sb/0.log" Nov 24 12:21:51 crc kubenswrapper[5072]: I1124 12:21:51.892537 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-64d9f94c7b-p7b2p_35ccd8e2-71e0-4a36-a51a-5c9a4734b124/placement-api/0.log" Nov 24 12:21:51 crc kubenswrapper[5072]: I1124 12:21:51.957014 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-64d9f94c7b-p7b2p_35ccd8e2-71e0-4a36-a51a-5c9a4734b124/placement-log/0.log" Nov 24 12:21:52 crc kubenswrapper[5072]: I1124 12:21:52.016430 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_38928c57-6c7d-4fb6-afe8-ed2602e450c3/setup-container/0.log" Nov 24 12:21:52 crc kubenswrapper[5072]: I1124 12:21:52.271925 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_38928c57-6c7d-4fb6-afe8-ed2602e450c3/rabbitmq/0.log" Nov 24 12:21:52 crc kubenswrapper[5072]: I1124 12:21:52.278499 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_02112c1c-a6a9-42e6-938e-e3e8d7b7217c/setup-container/0.log" Nov 24 12:21:52 crc kubenswrapper[5072]: I1124 12:21:52.278865 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_38928c57-6c7d-4fb6-afe8-ed2602e450c3/setup-container/0.log" Nov 24 12:21:52 crc kubenswrapper[5072]: I1124 12:21:52.571703 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_02112c1c-a6a9-42e6-938e-e3e8d7b7217c/setup-container/0.log" Nov 24 12:21:52 crc kubenswrapper[5072]: I1124 12:21:52.577486 5072 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95_ed449e35-f14d-45cf-b172-49441c6d676a/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:21:52 crc kubenswrapper[5072]: I1124 12:21:52.624364 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_02112c1c-a6a9-42e6-938e-e3e8d7b7217c/rabbitmq/0.log" Nov 24 12:21:52 crc kubenswrapper[5072]: I1124 12:21:52.827306 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd_0dcc0eb2-52d6-4d82-bddd-960848462a81/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:21:52 crc kubenswrapper[5072]: I1124 12:21:52.845827 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-9klcc_d97f4dff-1854-4cf0-9546-1626e9a5856b/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:21:53 crc kubenswrapper[5072]: I1124 12:21:53.068746 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-p68cc_c8ddc412-753d-44ff-9ac9-39a003a786dd/ssh-known-hosts-edpm-deployment/0.log" Nov 24 12:21:53 crc kubenswrapper[5072]: I1124 12:21:53.173009 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_c4384a66-1728-45a3-9ab4-d1479c51cd18/tempest-tests-tempest-tests-runner/0.log" Nov 24 12:21:53 crc kubenswrapper[5072]: I1124 12:21:53.306739 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_5e7f7b49-4b5e-4050-bfdb-0cea02628c47/test-operator-logs-container/0.log" Nov 24 12:21:53 crc kubenswrapper[5072]: I1124 12:21:53.391568 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj_2f1ddd2f-edb5-4613-9fde-a27861d899bc/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:22:11 crc kubenswrapper[5072]: I1124 12:22:11.269182 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_f0ecdfec-d313-40dc-97a6-344109151fe8/memcached/0.log" Nov 24 12:22:13 crc kubenswrapper[5072]: I1124 12:22:13.645325 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:22:13 crc kubenswrapper[5072]: I1124 12:22:13.645780 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:22:13 crc kubenswrapper[5072]: I1124 12:22:13.645848 5072 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 12:22:13 crc kubenswrapper[5072]: I1124 12:22:13.647169 5072 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e"} pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" containerMessage="Container machine-config-daemon failed liveness probe, 
will be restarted" Nov 24 12:22:13 crc kubenswrapper[5072]: I1124 12:22:13.647330 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" containerID="cri-o://5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e" gracePeriod=600 Nov 24 12:22:14 crc kubenswrapper[5072]: E1124 12:22:14.323148 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:22:14 crc kubenswrapper[5072]: I1124 12:22:14.577683 5072 generic.go:334] "Generic (PLEG): container finished" podID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerID="5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e" exitCode=0 Nov 24 12:22:14 crc kubenswrapper[5072]: I1124 12:22:14.577733 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerDied","Data":"5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e"} Nov 24 12:22:14 crc kubenswrapper[5072]: I1124 12:22:14.577770 5072 scope.go:117] "RemoveContainer" containerID="d220dc7647c7de191bb9661af86034533cedb6d0eef421dd6a5fd92481793daf" Nov 24 12:22:14 crc kubenswrapper[5072]: I1124 12:22:14.578460 5072 scope.go:117] "RemoveContainer" containerID="5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e" Nov 24 12:22:14 crc kubenswrapper[5072]: E1124 12:22:14.578877 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:22:22 crc kubenswrapper[5072]: I1124 12:22:22.637802 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65_e7f9a3f4-4e91-406d-b8da-1bf99ac318bd/util/0.log" Nov 24 12:22:22 crc kubenswrapper[5072]: I1124 12:22:22.777365 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65_e7f9a3f4-4e91-406d-b8da-1bf99ac318bd/util/0.log" Nov 24 12:22:22 crc kubenswrapper[5072]: I1124 12:22:22.787182 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65_e7f9a3f4-4e91-406d-b8da-1bf99ac318bd/pull/0.log" Nov 24 12:22:22 crc kubenswrapper[5072]: I1124 12:22:22.881592 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65_e7f9a3f4-4e91-406d-b8da-1bf99ac318bd/pull/0.log" Nov 24 12:22:23 crc kubenswrapper[5072]: I1124 12:22:23.021606 5072 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65_e7f9a3f4-4e91-406d-b8da-1bf99ac318bd/pull/0.log" Nov 24 12:22:23 crc kubenswrapper[5072]: I1124 12:22:23.026611 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65_e7f9a3f4-4e91-406d-b8da-1bf99ac318bd/extract/0.log" Nov 24 12:22:23 crc kubenswrapper[5072]: I1124 12:22:23.037039 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65_e7f9a3f4-4e91-406d-b8da-1bf99ac318bd/util/0.log" Nov 24 12:22:23 crc kubenswrapper[5072]: I1124 12:22:23.691481 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-4jwxd_a4945263-5f74-4c93-b782-8a381e40275c/manager/0.log" Nov 24 12:22:23 crc kubenswrapper[5072]: I1124 12:22:23.700923 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-4jwxd_a4945263-5f74-4c93-b782-8a381e40275c/kube-rbac-proxy/0.log" Nov 24 12:22:23 crc kubenswrapper[5072]: I1124 12:22:23.735668 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-756nd_459e53de-60cc-4763-a093-4940428df8c3/kube-rbac-proxy/0.log" Nov 24 12:22:23 crc kubenswrapper[5072]: I1124 12:22:23.915578 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-bpsnt_500235e4-633d-486d-8ea9-bc0830747b6f/kube-rbac-proxy/0.log" Nov 24 12:22:23 crc kubenswrapper[5072]: I1124 12:22:23.919895 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-756nd_459e53de-60cc-4763-a093-4940428df8c3/manager/0.log" Nov 24 12:22:23 crc kubenswrapper[5072]: I1124 12:22:23.942161 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-bpsnt_500235e4-633d-486d-8ea9-bc0830747b6f/manager/0.log" Nov 24 12:22:24 crc kubenswrapper[5072]: I1124 12:22:24.144478 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-5s9dg_67cd7ebd-5d77-4c59-a1af-2283997e4de4/kube-rbac-proxy/0.log" Nov 24 12:22:24 crc kubenswrapper[5072]: I1124 12:22:24.214092 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-5s9dg_67cd7ebd-5d77-4c59-a1af-2283997e4de4/manager/0.log" Nov 24 12:22:24 crc kubenswrapper[5072]: I1124 12:22:24.360957 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-qn647_62a8ddcc-1b1e-4bd6-8e4b-41273932a900/kube-rbac-proxy/0.log" Nov 24 12:22:24 crc kubenswrapper[5072]: I1124 12:22:24.367557 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-qn647_62a8ddcc-1b1e-4bd6-8e4b-41273932a900/manager/0.log" Nov 24 12:22:24 crc kubenswrapper[5072]: I1124 12:22:24.432830 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-wkqz4_bdcb07cf-3d31-40c8-bd3b-1c791408a3b9/kube-rbac-proxy/0.log" Nov 24 12:22:24 crc kubenswrapper[5072]: I1124 12:22:24.563102 
5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-wkqz4_bdcb07cf-3d31-40c8-bd3b-1c791408a3b9/manager/0.log" Nov 24 12:22:24 crc kubenswrapper[5072]: I1124 12:22:24.617243 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-858778c9dc-lrk4z_e8ca42b5-22f1-4101-bbf6-d053bda8b6f2/kube-rbac-proxy/0.log" Nov 24 12:22:24 crc kubenswrapper[5072]: I1124 12:22:24.817286 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-858778c9dc-lrk4z_e8ca42b5-22f1-4101-bbf6-d053bda8b6f2/manager/0.log" Nov 24 12:22:24 crc kubenswrapper[5072]: I1124 12:22:24.821683 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-7mzzw_d7f60d9f-304e-4531-aeec-6c4a576d3a1e/manager/0.log" Nov 24 12:22:24 crc kubenswrapper[5072]: I1124 12:22:24.827524 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-7mzzw_d7f60d9f-304e-4531-aeec-6c4a576d3a1e/kube-rbac-proxy/0.log" Nov 24 12:22:25 crc kubenswrapper[5072]: I1124 12:22:25.037548 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-rbff2_39f25192-6179-44cd-894a-0ebf01a675e1/kube-rbac-proxy/0.log" Nov 24 12:22:25 crc kubenswrapper[5072]: I1124 12:22:25.134074 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-rbff2_39f25192-6179-44cd-894a-0ebf01a675e1/manager/0.log" Nov 24 12:22:25 crc kubenswrapper[5072]: I1124 12:22:25.197348 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6588bc459f-mnxdw_7bf279a5-5615-474c-8f17-0066eb4a681d/kube-rbac-proxy/0.log" Nov 24 12:22:25 crc kubenswrapper[5072]: I1124 12:22:25.316698 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6588bc459f-mnxdw_7bf279a5-5615-474c-8f17-0066eb4a681d/manager/0.log" Nov 24 12:22:25 crc kubenswrapper[5072]: I1124 12:22:25.378339 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-vwkpc_9696dd76-5a2d-46d8-b344-bde781c44bd9/kube-rbac-proxy/0.log" Nov 24 12:22:25 crc kubenswrapper[5072]: I1124 12:22:25.435021 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-vwkpc_9696dd76-5a2d-46d8-b344-bde781c44bd9/manager/0.log" Nov 24 12:22:25 crc kubenswrapper[5072]: I1124 12:22:25.539592 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-b7nnc_82a02d23-10da-4e39-a81a-9f63180ecc89/kube-rbac-proxy/0.log" Nov 24 12:22:25 crc kubenswrapper[5072]: I1124 12:22:25.567500 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-b7nnc_82a02d23-10da-4e39-a81a-9f63180ecc89/manager/0.log" Nov 24 12:22:25 crc kubenswrapper[5072]: I1124 12:22:25.697131 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-r7mbw_fc8a9f5f-37fe-417e-9016-886b359a5a71/kube-rbac-proxy/0.log" Nov 24 12:22:25 crc kubenswrapper[5072]: 
I1124 12:22:25.979046 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-4z4cm_1b89d966-3ff3-451d-859c-0198a7cde893/kube-rbac-proxy/0.log" Nov 24 12:22:26 crc kubenswrapper[5072]: I1124 12:22:26.034773 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-r7mbw_fc8a9f5f-37fe-417e-9016-886b359a5a71/manager/0.log" Nov 24 12:22:26 crc kubenswrapper[5072]: I1124 12:22:26.054987 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-4z4cm_1b89d966-3ff3-451d-859c-0198a7cde893/manager/0.log" Nov 24 12:22:26 crc kubenswrapper[5072]: I1124 12:22:26.197981 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-5sknj_ff7d4c70-56ad-4baa-b7eb-bba77d3811bb/kube-rbac-proxy/0.log" Nov 24 12:22:26 crc kubenswrapper[5072]: I1124 12:22:26.212916 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-5sknj_ff7d4c70-56ad-4baa-b7eb-bba77d3811bb/manager/0.log" Nov 24 12:22:26 crc kubenswrapper[5072]: I1124 12:22:26.479837 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-fj9hm_647cb5b8-46fc-4c8d-90af-18ef37a34807/registry-server/0.log" Nov 24 12:22:26 crc kubenswrapper[5072]: I1124 12:22:26.606463 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-68868f9b94-xzgj7_cf28b96d-16c5-40f6-a588-0a77f527d52d/operator/0.log" Nov 24 12:22:26 crc kubenswrapper[5072]: I1124 12:22:26.665688 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-p6hcl_edb8360f-2977-47c4-9029-02341a92a6de/kube-rbac-proxy/0.log" Nov 24 12:22:26 crc kubenswrapper[5072]: I1124 12:22:26.796358 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-p6hcl_edb8360f-2977-47c4-9029-02341a92a6de/manager/0.log" Nov 24 12:22:26 crc kubenswrapper[5072]: I1124 12:22:26.874834 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-jh4nt_64a55d3a-a7ab-4bce-8497-1992e9591a90/kube-rbac-proxy/0.log" Nov 24 12:22:26 crc kubenswrapper[5072]: I1124 12:22:26.959956 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-jh4nt_64a55d3a-a7ab-4bce-8497-1992e9591a90/manager/0.log" Nov 24 12:22:27 crc kubenswrapper[5072]: I1124 12:22:27.098889 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-lgdqp_88168be8-a585-468a-a983-f56bbb31b4a0/operator/0.log" Nov 24 12:22:27 crc kubenswrapper[5072]: I1124 12:22:27.246304 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-r7bsx_321368f6-c64b-4d58-ae2a-e939d6d447f7/kube-rbac-proxy/0.log" Nov 24 12:22:27 crc kubenswrapper[5072]: I1124 12:22:27.293209 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-r7bsx_321368f6-c64b-4d58-ae2a-e939d6d447f7/manager/0.log" Nov 24 12:22:27 crc 
kubenswrapper[5072]: I1124 12:22:27.362165 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-cfj6h_7c599673-db2a-4c37-88fa-45e7166f6c20/kube-rbac-proxy/0.log" Nov 24 12:22:27 crc kubenswrapper[5072]: I1124 12:22:27.603936 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-dvldw_cd9a8dda-b29e-4e10-837a-d00bdcf6bdaa/manager/0.log" Nov 24 12:22:27 crc kubenswrapper[5072]: I1124 12:22:27.606114 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-cfj6h_7c599673-db2a-4c37-88fa-45e7166f6c20/manager/0.log" Nov 24 12:22:27 crc kubenswrapper[5072]: I1124 12:22:27.632655 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-698dfbd98-5pfmt_ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400/manager/0.log" Nov 24 12:22:27 crc kubenswrapper[5072]: I1124 12:22:27.652601 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-dvldw_cd9a8dda-b29e-4e10-837a-d00bdcf6bdaa/kube-rbac-proxy/0.log" Nov 24 12:22:27 crc kubenswrapper[5072]: I1124 12:22:27.805458 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-bz2zj_0d17eb13-802b-4d4a-b221-1481e16e1110/kube-rbac-proxy/0.log" Nov 24 12:22:27 crc kubenswrapper[5072]: I1124 12:22:27.806301 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-bz2zj_0d17eb13-802b-4d4a-b221-1481e16e1110/manager/0.log" Nov 24 12:22:29 crc kubenswrapper[5072]: I1124 12:22:29.022552 5072 scope.go:117] "RemoveContainer" containerID="5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e" Nov 24 12:22:29 crc kubenswrapper[5072]: E1124 12:22:29.023113 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:22:44 crc kubenswrapper[5072]: I1124 12:22:44.016938 5072 scope.go:117] "RemoveContainer" containerID="5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e" Nov 24 12:22:44 crc kubenswrapper[5072]: E1124 12:22:44.017779 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:22:46 crc kubenswrapper[5072]: I1124 12:22:46.244166 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-nwsjb_7b8bcc47-53bd-45a5-937f-b515a314f662/control-plane-machine-set-operator/0.log" Nov 24 12:22:46 crc kubenswrapper[5072]: I1124 12:22:46.405508 5072 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dzh8r_bcbc6938-ae1b-4306-a73d-7f2c5dc64047/kube-rbac-proxy/0.log" Nov 24 12:22:46 crc kubenswrapper[5072]: I1124 12:22:46.489147 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dzh8r_bcbc6938-ae1b-4306-a73d-7f2c5dc64047/machine-api-operator/0.log" Nov 24 12:22:58 crc kubenswrapper[5072]: I1124 12:22:58.016434 5072 scope.go:117] "RemoveContainer" containerID="5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e" Nov 24 12:22:58 crc kubenswrapper[5072]: E1124 12:22:58.017664 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:22:59 crc kubenswrapper[5072]: I1124 12:22:59.621740 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-g8nvp_69649578-7c12-47bd-900a-a6ebe612c305/cert-manager-controller/0.log" Nov 24 12:22:59 crc kubenswrapper[5072]: I1124 12:22:59.804147 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-v62vq_01b23be1-c336-40a5-8b57-60ed5edddef1/cert-manager-cainjector/0.log" Nov 24 12:22:59 crc kubenswrapper[5072]: I1124 12:22:59.823097 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-hcmw7_5da70e2a-5e52-437b-b1e4-fee7f8460a72/cert-manager-webhook/0.log" Nov 24 12:23:11 crc kubenswrapper[5072]: I1124 12:23:11.016916 5072 scope.go:117] "RemoveContainer" containerID="5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e" Nov 24 12:23:11 crc kubenswrapper[5072]: E1124 12:23:11.017654 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:23:11 crc kubenswrapper[5072]: I1124 12:23:11.920045 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-ppjv5_abe6e260-c56f-46ff-b5a7-a7da6df2b64f/nmstate-console-plugin/0.log" Nov 24 12:23:12 crc kubenswrapper[5072]: I1124 12:23:12.648011 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-hhvlc_9b1242fa-766e-4ef6-b41f-0cc670aa35c2/nmstate-handler/0.log" Nov 24 12:23:12 crc kubenswrapper[5072]: I1124 12:23:12.677088 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-2ntqs_186c5c36-95cc-427c-af18-4ba4d0c8ea58/kube-rbac-proxy/0.log" Nov 24 12:23:12 crc kubenswrapper[5072]: I1124 12:23:12.706209 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-2ntqs_186c5c36-95cc-427c-af18-4ba4d0c8ea58/nmstate-metrics/0.log" Nov 24 12:23:12 crc kubenswrapper[5072]: I1124 12:23:12.868139 5072 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-q824z_b5b7e963-3dd2-4073-9297-2b03a0411ff3/nmstate-operator/0.log" Nov 24 12:23:12 crc kubenswrapper[5072]: I1124 12:23:12.916644 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-9x2g2_56a60d6f-8026-4722-95ad-aa81efc124f8/nmstate-webhook/0.log" Nov 24 12:23:22 crc kubenswrapper[5072]: I1124 12:23:22.016626 5072 scope.go:117] "RemoveContainer" containerID="5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e" Nov 24 12:23:22 crc kubenswrapper[5072]: E1124 12:23:22.017364 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:23:26 crc kubenswrapper[5072]: I1124 12:23:26.562001 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-54sxn_b9a94a05-9a99-48b5-8ba7-a1bd99f05577/kube-rbac-proxy/0.log" Nov 24 12:23:26 crc kubenswrapper[5072]: I1124 12:23:26.731951 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-54sxn_b9a94a05-9a99-48b5-8ba7-a1bd99f05577/controller/0.log" Nov 24 12:23:26 crc kubenswrapper[5072]: I1124 12:23:26.829319 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/cp-frr-files/0.log" Nov 24 12:23:27 crc kubenswrapper[5072]: I1124 12:23:27.013823 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/cp-reloader/0.log" Nov 24 12:23:27 crc kubenswrapper[5072]: I1124 12:23:27.023422 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/cp-frr-files/0.log" Nov 24 12:23:27 crc kubenswrapper[5072]: I1124 12:23:27.040870 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/cp-reloader/0.log" Nov 24 12:23:27 crc kubenswrapper[5072]: I1124 12:23:27.042197 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/cp-metrics/0.log" Nov 24 12:23:27 crc kubenswrapper[5072]: I1124 12:23:27.265660 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/cp-metrics/0.log" Nov 24 12:23:27 crc kubenswrapper[5072]: I1124 12:23:27.284092 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/cp-reloader/0.log" Nov 24 12:23:27 crc kubenswrapper[5072]: I1124 12:23:27.307028 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/cp-frr-files/0.log" Nov 24 12:23:27 crc kubenswrapper[5072]: I1124 12:23:27.314169 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/cp-metrics/0.log" Nov 24 12:23:27 crc kubenswrapper[5072]: I1124 12:23:27.456194 5072 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/cp-metrics/0.log" Nov 24 12:23:27 crc kubenswrapper[5072]: I1124 12:23:27.457553 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/cp-frr-files/0.log" Nov 24 12:23:27 crc kubenswrapper[5072]: I1124 12:23:27.483243 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/controller/0.log" Nov 24 12:23:27 crc kubenswrapper[5072]: I1124 12:23:27.500356 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/cp-reloader/0.log" Nov 24 12:23:27 crc kubenswrapper[5072]: I1124 12:23:27.666077 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/kube-rbac-proxy-frr/0.log" Nov 24 12:23:27 crc kubenswrapper[5072]: I1124 12:23:27.688748 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/frr-metrics/0.log" Nov 24 12:23:27 crc kubenswrapper[5072]: I1124 12:23:27.740857 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/kube-rbac-proxy/0.log" Nov 24 12:23:27 crc kubenswrapper[5072]: I1124 12:23:27.875234 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/reloader/0.log" Nov 24 12:23:27 crc kubenswrapper[5072]: I1124 12:23:27.930139 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-mjmzs_a4839b57-91b0-4472-ac9e-fd342a3430c0/frr-k8s-webhook-server/0.log" Nov 24 12:23:28 crc kubenswrapper[5072]: I1124 12:23:28.138175 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-b6dc8dd56-6d5x5_30512acc-64dc-4a20-88e5-565a69d8f95c/manager/0.log" Nov 24 12:23:28 crc kubenswrapper[5072]: I1124 12:23:28.333021 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-75d856c88d-rz946_e3c19ac2-dba1-4b49-acb0-1f93285f60b2/webhook-server/0.log" Nov 24 12:23:28 crc kubenswrapper[5072]: I1124 12:23:28.391467 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xc9ht_e5b09acb-4f8f-45f4-b669-c491f59a52e1/kube-rbac-proxy/0.log" Nov 24 12:23:29 crc kubenswrapper[5072]: I1124 12:23:29.076001 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xc9ht_e5b09acb-4f8f-45f4-b669-c491f59a52e1/speaker/0.log" Nov 24 12:23:29 crc kubenswrapper[5072]: I1124 12:23:29.384280 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/frr/0.log" Nov 24 12:23:34 crc kubenswrapper[5072]: I1124 12:23:34.017258 5072 scope.go:117] "RemoveContainer" containerID="5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e" Nov 24 12:23:34 crc kubenswrapper[5072]: E1124 12:23:34.018033 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:23:42 crc kubenswrapper[5072]: I1124 12:23:42.370923 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw_0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76/util/0.log" Nov 24 12:23:43 crc kubenswrapper[5072]: I1124 12:23:43.054595 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw_0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76/pull/0.log" Nov 24 12:23:43 crc kubenswrapper[5072]: I1124 12:23:43.086108 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw_0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76/util/0.log" Nov 24 12:23:43 crc kubenswrapper[5072]: I1124 12:23:43.112391 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw_0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76/pull/0.log" Nov 24 12:23:43 crc kubenswrapper[5072]: I1124 12:23:43.264436 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw_0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76/pull/0.log" Nov 24 12:23:43 crc kubenswrapper[5072]: I1124 12:23:43.267927 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw_0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76/util/0.log" Nov 24 12:23:43 crc kubenswrapper[5072]: I1124 12:23:43.291913 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw_0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76/extract/0.log" Nov 24 12:23:43 crc kubenswrapper[5072]: I1124 12:23:43.434435 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b8kkq_0b414b96-7437-45fe-82ff-663bdd600440/extract-utilities/0.log" Nov 24 12:23:43 crc kubenswrapper[5072]: I1124 12:23:43.603244 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b8kkq_0b414b96-7437-45fe-82ff-663bdd600440/extract-utilities/0.log" Nov 24 12:23:43 crc kubenswrapper[5072]: I1124 12:23:43.606890 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b8kkq_0b414b96-7437-45fe-82ff-663bdd600440/extract-content/0.log" Nov 24 12:23:43 crc kubenswrapper[5072]: I1124 12:23:43.631207 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b8kkq_0b414b96-7437-45fe-82ff-663bdd600440/extract-content/0.log" Nov 24 12:23:43 crc kubenswrapper[5072]: I1124 12:23:43.790346 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b8kkq_0b414b96-7437-45fe-82ff-663bdd600440/extract-content/0.log" Nov 24 12:23:43 crc kubenswrapper[5072]: I1124 12:23:43.821151 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b8kkq_0b414b96-7437-45fe-82ff-663bdd600440/extract-utilities/0.log" Nov 24 12:23:44 crc kubenswrapper[5072]: I1124 12:23:44.014549 5072 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-4nsmr_38853327-58cd-437a-9f17-6558118671bf/extract-utilities/0.log" Nov 24 12:23:44 crc kubenswrapper[5072]: I1124 12:23:44.247401 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4nsmr_38853327-58cd-437a-9f17-6558118671bf/extract-content/0.log" Nov 24 12:23:44 crc kubenswrapper[5072]: I1124 12:23:44.292774 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4nsmr_38853327-58cd-437a-9f17-6558118671bf/extract-content/0.log" Nov 24 12:23:44 crc kubenswrapper[5072]: I1124 12:23:44.299255 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4nsmr_38853327-58cd-437a-9f17-6558118671bf/extract-utilities/0.log" Nov 24 12:23:44 crc kubenswrapper[5072]: I1124 12:23:44.436247 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4nsmr_38853327-58cd-437a-9f17-6558118671bf/extract-utilities/0.log" Nov 24 12:23:44 crc kubenswrapper[5072]: I1124 12:23:44.455149 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4nsmr_38853327-58cd-437a-9f17-6558118671bf/extract-content/0.log" Nov 24 12:23:44 crc kubenswrapper[5072]: I1124 12:23:44.690226 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m_e5fd58fa-412d-4812-b49a-ad193626aed8/util/0.log" Nov 24 12:23:45 crc kubenswrapper[5072]: I1124 12:23:45.020038 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m_e5fd58fa-412d-4812-b49a-ad193626aed8/pull/0.log" Nov 24 12:23:45 crc kubenswrapper[5072]: I1124 12:23:45.021207 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b8kkq_0b414b96-7437-45fe-82ff-663bdd600440/registry-server/0.log" Nov 24 12:23:45 crc kubenswrapper[5072]: I1124 12:23:45.050102 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m_e5fd58fa-412d-4812-b49a-ad193626aed8/util/0.log" Nov 24 12:23:45 crc kubenswrapper[5072]: I1124 12:23:45.249783 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m_e5fd58fa-412d-4812-b49a-ad193626aed8/pull/0.log" Nov 24 12:23:45 crc kubenswrapper[5072]: I1124 12:23:45.250902 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4nsmr_38853327-58cd-437a-9f17-6558118671bf/registry-server/0.log" Nov 24 12:23:45 crc kubenswrapper[5072]: I1124 12:23:45.415853 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m_e5fd58fa-412d-4812-b49a-ad193626aed8/pull/0.log" Nov 24 12:23:45 crc kubenswrapper[5072]: I1124 12:23:45.419627 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m_e5fd58fa-412d-4812-b49a-ad193626aed8/extract/0.log" Nov 24 12:23:45 crc kubenswrapper[5072]: I1124 12:23:45.440501 5072 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m_e5fd58fa-412d-4812-b49a-ad193626aed8/util/0.log" Nov 24 12:23:45 crc kubenswrapper[5072]: I1124 12:23:45.617713 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-4scvq_f3db2294-11de-44ff-ac29-e9f1bcf6cd24/marketplace-operator/0.log" Nov 24 12:23:45 crc kubenswrapper[5072]: I1124 12:23:45.628918 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4jrmf_afa685e2-1d27-44a0-bdb9-ee494b9e8190/extract-utilities/0.log" Nov 24 12:23:45 crc kubenswrapper[5072]: I1124 12:23:45.845316 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4jrmf_afa685e2-1d27-44a0-bdb9-ee494b9e8190/extract-content/0.log" Nov 24 12:23:45 crc kubenswrapper[5072]: I1124 12:23:45.848982 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4jrmf_afa685e2-1d27-44a0-bdb9-ee494b9e8190/extract-content/0.log" Nov 24 12:23:45 crc kubenswrapper[5072]: I1124 12:23:45.867197 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4jrmf_afa685e2-1d27-44a0-bdb9-ee494b9e8190/extract-utilities/0.log" Nov 24 12:23:46 crc kubenswrapper[5072]: I1124 12:23:46.001732 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4jrmf_afa685e2-1d27-44a0-bdb9-ee494b9e8190/extract-content/0.log" Nov 24 12:23:46 crc kubenswrapper[5072]: I1124 12:23:46.013936 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4jrmf_afa685e2-1d27-44a0-bdb9-ee494b9e8190/extract-utilities/0.log" Nov 24 12:23:46 crc kubenswrapper[5072]: I1124 12:23:46.017486 5072 scope.go:117] "RemoveContainer" containerID="5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e" Nov 24 12:23:46 crc kubenswrapper[5072]: E1124 12:23:46.017833 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:23:46 crc kubenswrapper[5072]: I1124 12:23:46.087725 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j5htq_8b8c141a-32f9-41ba-95af-8448cf8cd002/extract-utilities/0.log" Nov 24 12:23:46 crc kubenswrapper[5072]: I1124 12:23:46.190912 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4jrmf_afa685e2-1d27-44a0-bdb9-ee494b9e8190/registry-server/0.log" Nov 24 12:23:46 crc kubenswrapper[5072]: I1124 12:23:46.260465 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j5htq_8b8c141a-32f9-41ba-95af-8448cf8cd002/extract-utilities/0.log" Nov 24 12:23:46 crc kubenswrapper[5072]: I1124 12:23:46.293038 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j5htq_8b8c141a-32f9-41ba-95af-8448cf8cd002/extract-content/0.log" Nov 24 12:23:46 crc kubenswrapper[5072]: I1124 12:23:46.295893 5072 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-j5htq_8b8c141a-32f9-41ba-95af-8448cf8cd002/extract-content/0.log" Nov 24 12:23:46 crc kubenswrapper[5072]: I1124 12:23:46.434241 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j5htq_8b8c141a-32f9-41ba-95af-8448cf8cd002/extract-utilities/0.log" Nov 24 12:23:46 crc kubenswrapper[5072]: I1124 12:23:46.444977 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j5htq_8b8c141a-32f9-41ba-95af-8448cf8cd002/extract-content/0.log" Nov 24 12:23:46 crc kubenswrapper[5072]: I1124 12:23:46.616684 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j5htq_8b8c141a-32f9-41ba-95af-8448cf8cd002/registry-server/0.log" Nov 24 12:23:58 crc kubenswrapper[5072]: I1124 12:23:58.016804 5072 scope.go:117] "RemoveContainer" containerID="5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e" Nov 24 12:23:58 crc kubenswrapper[5072]: E1124 12:23:58.017757 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:24:12 crc kubenswrapper[5072]: I1124 12:24:12.016413 5072 scope.go:117] "RemoveContainer" containerID="5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e" Nov 24 12:24:12 crc kubenswrapper[5072]: E1124 12:24:12.017166 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:24:27 crc kubenswrapper[5072]: I1124 12:24:27.017157 5072 scope.go:117] "RemoveContainer" containerID="5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e" Nov 24 12:24:27 crc kubenswrapper[5072]: E1124 12:24:27.017941 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:24:42 crc kubenswrapper[5072]: I1124 12:24:42.016691 5072 scope.go:117] "RemoveContainer" containerID="5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e" Nov 24 12:24:42 crc kubenswrapper[5072]: E1124 12:24:42.017392 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:24:53 crc kubenswrapper[5072]: 
I1124 12:24:53.016293 5072 scope.go:117] "RemoveContainer" containerID="5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e" Nov 24 12:24:53 crc kubenswrapper[5072]: E1124 12:24:53.017229 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:24:56 crc kubenswrapper[5072]: I1124 12:24:56.269972 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xd6gh"] Nov 24 12:24:56 crc kubenswrapper[5072]: E1124 12:24:56.270974 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d51a210-96fd-493c-bb5f-e5dcb287a43c" containerName="container-00" Nov 24 12:24:56 crc kubenswrapper[5072]: I1124 12:24:56.270989 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d51a210-96fd-493c-bb5f-e5dcb287a43c" containerName="container-00" Nov 24 12:24:56 crc kubenswrapper[5072]: I1124 12:24:56.271437 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d51a210-96fd-493c-bb5f-e5dcb287a43c" containerName="container-00" Nov 24 12:24:56 crc kubenswrapper[5072]: I1124 12:24:56.272783 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xd6gh" Nov 24 12:24:56 crc kubenswrapper[5072]: I1124 12:24:56.283950 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xd6gh"] Nov 24 12:24:56 crc kubenswrapper[5072]: I1124 12:24:56.471770 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhrbt\" (UniqueName: \"kubernetes.io/projected/f71f0d16-cdd3-4830-b04a-21c40dca10d9-kube-api-access-lhrbt\") pod \"redhat-operators-xd6gh\" (UID: \"f71f0d16-cdd3-4830-b04a-21c40dca10d9\") " pod="openshift-marketplace/redhat-operators-xd6gh" Nov 24 12:24:56 crc kubenswrapper[5072]: I1124 12:24:56.471861 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f71f0d16-cdd3-4830-b04a-21c40dca10d9-utilities\") pod \"redhat-operators-xd6gh\" (UID: \"f71f0d16-cdd3-4830-b04a-21c40dca10d9\") " pod="openshift-marketplace/redhat-operators-xd6gh" Nov 24 12:24:56 crc kubenswrapper[5072]: I1124 12:24:56.472046 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f71f0d16-cdd3-4830-b04a-21c40dca10d9-catalog-content\") pod \"redhat-operators-xd6gh\" (UID: \"f71f0d16-cdd3-4830-b04a-21c40dca10d9\") " pod="openshift-marketplace/redhat-operators-xd6gh" Nov 24 12:24:56 crc kubenswrapper[5072]: I1124 12:24:56.574051 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhrbt\" (UniqueName: \"kubernetes.io/projected/f71f0d16-cdd3-4830-b04a-21c40dca10d9-kube-api-access-lhrbt\") pod \"redhat-operators-xd6gh\" (UID: \"f71f0d16-cdd3-4830-b04a-21c40dca10d9\") " pod="openshift-marketplace/redhat-operators-xd6gh" Nov 24 12:24:56 crc kubenswrapper[5072]: I1124 12:24:56.574125 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f71f0d16-cdd3-4830-b04a-21c40dca10d9-utilities\") pod \"redhat-operators-xd6gh\" (UID: \"f71f0d16-cdd3-4830-b04a-21c40dca10d9\") " pod="openshift-marketplace/redhat-operators-xd6gh" Nov 24 12:24:56 crc kubenswrapper[5072]: I1124 12:24:56.574193 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f71f0d16-cdd3-4830-b04a-21c40dca10d9-catalog-content\") pod \"redhat-operators-xd6gh\" (UID: \"f71f0d16-cdd3-4830-b04a-21c40dca10d9\") " pod="openshift-marketplace/redhat-operators-xd6gh" Nov 24 12:24:56 crc kubenswrapper[5072]: I1124 12:24:56.574739 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f71f0d16-cdd3-4830-b04a-21c40dca10d9-utilities\") pod \"redhat-operators-xd6gh\" (UID: \"f71f0d16-cdd3-4830-b04a-21c40dca10d9\") " pod="openshift-marketplace/redhat-operators-xd6gh" Nov 24 12:24:56 crc kubenswrapper[5072]: I1124 12:24:56.574769 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f71f0d16-cdd3-4830-b04a-21c40dca10d9-catalog-content\") pod \"redhat-operators-xd6gh\" (UID: \"f71f0d16-cdd3-4830-b04a-21c40dca10d9\") " pod="openshift-marketplace/redhat-operators-xd6gh" Nov 24 12:24:56 crc kubenswrapper[5072]: I1124 12:24:56.605200 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhrbt\" (UniqueName: \"kubernetes.io/projected/f71f0d16-cdd3-4830-b04a-21c40dca10d9-kube-api-access-lhrbt\") pod \"redhat-operators-xd6gh\" (UID: \"f71f0d16-cdd3-4830-b04a-21c40dca10d9\") " pod="openshift-marketplace/redhat-operators-xd6gh" Nov 24 12:24:56 crc kubenswrapper[5072]: I1124 12:24:56.893925 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xd6gh" Nov 24 12:24:57 crc kubenswrapper[5072]: I1124 12:24:57.413873 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xd6gh"] Nov 24 12:24:58 crc kubenswrapper[5072]: I1124 12:24:58.059400 5072 generic.go:334] "Generic (PLEG): container finished" podID="f71f0d16-cdd3-4830-b04a-21c40dca10d9" containerID="3b54e97a0c745812a959be344fe1369c3670cffa508be7bd02993ffde6c206e3" exitCode=0 Nov 24 12:24:58 crc kubenswrapper[5072]: I1124 12:24:58.059496 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xd6gh" event={"ID":"f71f0d16-cdd3-4830-b04a-21c40dca10d9","Type":"ContainerDied","Data":"3b54e97a0c745812a959be344fe1369c3670cffa508be7bd02993ffde6c206e3"} Nov 24 12:24:58 crc kubenswrapper[5072]: I1124 12:24:58.059681 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xd6gh" event={"ID":"f71f0d16-cdd3-4830-b04a-21c40dca10d9","Type":"ContainerStarted","Data":"d6a17608af74122ececb6c2541be5155e9124ff8bc63610538ba0478c94da3a9"} Nov 24 12:24:59 crc kubenswrapper[5072]: I1124 12:24:59.069302 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xd6gh" event={"ID":"f71f0d16-cdd3-4830-b04a-21c40dca10d9","Type":"ContainerStarted","Data":"e47b6eed82f9f41576d69d2d6ce0d958b8125bbc8cb3b34ae15edc7bd66984b0"} Nov 24 12:25:05 crc kubenswrapper[5072]: I1124 12:25:05.120939 5072 generic.go:334] "Generic (PLEG): container finished" podID="f71f0d16-cdd3-4830-b04a-21c40dca10d9" containerID="e47b6eed82f9f41576d69d2d6ce0d958b8125bbc8cb3b34ae15edc7bd66984b0" exitCode=0 Nov 24 12:25:05 crc kubenswrapper[5072]: I1124 12:25:05.121125 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xd6gh" event={"ID":"f71f0d16-cdd3-4830-b04a-21c40dca10d9","Type":"ContainerDied","Data":"e47b6eed82f9f41576d69d2d6ce0d958b8125bbc8cb3b34ae15edc7bd66984b0"} Nov 24 12:25:05 crc kubenswrapper[5072]: I1124 12:25:05.125715 5072 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:25:06 crc kubenswrapper[5072]: I1124 12:25:06.016454 5072 scope.go:117] "RemoveContainer" containerID="5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e" Nov 24 12:25:06 crc kubenswrapper[5072]: E1124 12:25:06.017029 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:25:07 crc kubenswrapper[5072]: I1124 12:25:07.141791 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xd6gh" event={"ID":"f71f0d16-cdd3-4830-b04a-21c40dca10d9","Type":"ContainerStarted","Data":"f02a201a3c4214faeee8338fadc6a7b60d7bc10373627df435baad66092d9cac"} Nov 24 12:25:07 crc kubenswrapper[5072]: I1124 12:25:07.177200 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xd6gh" podStartSLOduration=3.703593866 podStartE2EDuration="11.177178359s" podCreationTimestamp="2025-11-24 12:24:56 +0000 UTC" firstStartedPulling="2025-11-24 12:24:58.060811641 
+0000 UTC m=+4549.772336137" lastFinishedPulling="2025-11-24 12:25:05.534396144 +0000 UTC m=+4557.245920630" observedRunningTime="2025-11-24 12:25:07.166386328 +0000 UTC m=+4558.877910824" watchObservedRunningTime="2025-11-24 12:25:07.177178359 +0000 UTC m=+4558.888702855" Nov 24 12:25:11 crc kubenswrapper[5072]: I1124 12:25:11.895687 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nxbx5"] Nov 24 12:25:11 crc kubenswrapper[5072]: I1124 12:25:11.900431 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nxbx5" Nov 24 12:25:11 crc kubenswrapper[5072]: I1124 12:25:11.908730 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nxbx5"] Nov 24 12:25:12 crc kubenswrapper[5072]: I1124 12:25:12.006296 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlqmw\" (UniqueName: \"kubernetes.io/projected/469ac11b-b247-4725-bb22-c7be72c437a2-kube-api-access-xlqmw\") pod \"certified-operators-nxbx5\" (UID: \"469ac11b-b247-4725-bb22-c7be72c437a2\") " pod="openshift-marketplace/certified-operators-nxbx5" Nov 24 12:25:12 crc kubenswrapper[5072]: I1124 12:25:12.006418 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/469ac11b-b247-4725-bb22-c7be72c437a2-catalog-content\") pod \"certified-operators-nxbx5\" (UID: \"469ac11b-b247-4725-bb22-c7be72c437a2\") " pod="openshift-marketplace/certified-operators-nxbx5" Nov 24 12:25:12 crc kubenswrapper[5072]: I1124 12:25:12.006535 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/469ac11b-b247-4725-bb22-c7be72c437a2-utilities\") pod \"certified-operators-nxbx5\" (UID: \"469ac11b-b247-4725-bb22-c7be72c437a2\") " pod="openshift-marketplace/certified-operators-nxbx5" Nov 24 12:25:12 crc kubenswrapper[5072]: I1124 12:25:12.108733 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlqmw\" (UniqueName: \"kubernetes.io/projected/469ac11b-b247-4725-bb22-c7be72c437a2-kube-api-access-xlqmw\") pod \"certified-operators-nxbx5\" (UID: \"469ac11b-b247-4725-bb22-c7be72c437a2\") " pod="openshift-marketplace/certified-operators-nxbx5" Nov 24 12:25:12 crc kubenswrapper[5072]: I1124 12:25:12.108885 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/469ac11b-b247-4725-bb22-c7be72c437a2-catalog-content\") pod \"certified-operators-nxbx5\" (UID: \"469ac11b-b247-4725-bb22-c7be72c437a2\") " pod="openshift-marketplace/certified-operators-nxbx5" Nov 24 12:25:12 crc kubenswrapper[5072]: I1124 12:25:12.108978 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/469ac11b-b247-4725-bb22-c7be72c437a2-utilities\") pod \"certified-operators-nxbx5\" (UID: \"469ac11b-b247-4725-bb22-c7be72c437a2\") " pod="openshift-marketplace/certified-operators-nxbx5" Nov 24 12:25:12 crc kubenswrapper[5072]: I1124 12:25:12.109850 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/469ac11b-b247-4725-bb22-c7be72c437a2-catalog-content\") pod \"certified-operators-nxbx5\" (UID: 
\"469ac11b-b247-4725-bb22-c7be72c437a2\") " pod="openshift-marketplace/certified-operators-nxbx5" Nov 24 12:25:12 crc kubenswrapper[5072]: I1124 12:25:12.109969 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/469ac11b-b247-4725-bb22-c7be72c437a2-utilities\") pod \"certified-operators-nxbx5\" (UID: \"469ac11b-b247-4725-bb22-c7be72c437a2\") " pod="openshift-marketplace/certified-operators-nxbx5" Nov 24 12:25:12 crc kubenswrapper[5072]: I1124 12:25:12.132264 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlqmw\" (UniqueName: \"kubernetes.io/projected/469ac11b-b247-4725-bb22-c7be72c437a2-kube-api-access-xlqmw\") pod \"certified-operators-nxbx5\" (UID: \"469ac11b-b247-4725-bb22-c7be72c437a2\") " pod="openshift-marketplace/certified-operators-nxbx5" Nov 24 12:25:12 crc kubenswrapper[5072]: I1124 12:25:12.239170 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nxbx5" Nov 24 12:25:12 crc kubenswrapper[5072]: W1124 12:25:12.863146 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod469ac11b_b247_4725_bb22_c7be72c437a2.slice/crio-c4cd49a131250fb5eeb74860839e9a6aad715223de7cd219a2fee431be4dfc0d WatchSource:0}: Error finding container c4cd49a131250fb5eeb74860839e9a6aad715223de7cd219a2fee431be4dfc0d: Status 404 returned error can't find the container with id c4cd49a131250fb5eeb74860839e9a6aad715223de7cd219a2fee431be4dfc0d Nov 24 12:25:12 crc kubenswrapper[5072]: I1124 12:25:12.868728 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nxbx5"] Nov 24 12:25:13 crc kubenswrapper[5072]: I1124 12:25:13.198736 5072 generic.go:334] "Generic (PLEG): container finished" podID="469ac11b-b247-4725-bb22-c7be72c437a2" containerID="2277352de04354a35d4ca60d58f3c221ca1ac3d665e7087fab55de1b2c2dfa22" exitCode=0 Nov 24 12:25:13 crc kubenswrapper[5072]: I1124 12:25:13.198776 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxbx5" event={"ID":"469ac11b-b247-4725-bb22-c7be72c437a2","Type":"ContainerDied","Data":"2277352de04354a35d4ca60d58f3c221ca1ac3d665e7087fab55de1b2c2dfa22"} Nov 24 12:25:13 crc kubenswrapper[5072]: I1124 12:25:13.198799 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxbx5" event={"ID":"469ac11b-b247-4725-bb22-c7be72c437a2","Type":"ContainerStarted","Data":"c4cd49a131250fb5eeb74860839e9a6aad715223de7cd219a2fee431be4dfc0d"} Nov 24 12:25:15 crc kubenswrapper[5072]: I1124 12:25:15.227539 5072 generic.go:334] "Generic (PLEG): container finished" podID="469ac11b-b247-4725-bb22-c7be72c437a2" containerID="8cd384e67d70e329675d5cc7fd21753928e16ba58d5d19c70de0d61fc56ecab3" exitCode=0 Nov 24 12:25:15 crc kubenswrapper[5072]: I1124 12:25:15.228074 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxbx5" event={"ID":"469ac11b-b247-4725-bb22-c7be72c437a2","Type":"ContainerDied","Data":"8cd384e67d70e329675d5cc7fd21753928e16ba58d5d19c70de0d61fc56ecab3"} Nov 24 12:25:16 crc kubenswrapper[5072]: I1124 12:25:16.895393 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xd6gh" Nov 24 12:25:16 crc kubenswrapper[5072]: I1124 12:25:16.895910 5072 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xd6gh" Nov 24 12:25:16 crc kubenswrapper[5072]: I1124 12:25:16.954036 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xd6gh" Nov 24 12:25:17 crc kubenswrapper[5072]: I1124 12:25:17.251324 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxbx5" event={"ID":"469ac11b-b247-4725-bb22-c7be72c437a2","Type":"ContainerStarted","Data":"5d2b5ce9e725716350be463c6644da50c5807193be1111327117823669d19903"} Nov 24 12:25:17 crc kubenswrapper[5072]: I1124 12:25:17.307392 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nxbx5" podStartSLOduration=3.680207791 podStartE2EDuration="6.307362526s" podCreationTimestamp="2025-11-24 12:25:11 +0000 UTC" firstStartedPulling="2025-11-24 12:25:13.201421071 +0000 UTC m=+4564.912945547" lastFinishedPulling="2025-11-24 12:25:15.828575806 +0000 UTC m=+4567.540100282" observedRunningTime="2025-11-24 12:25:17.306066823 +0000 UTC m=+4569.017591289" watchObservedRunningTime="2025-11-24 12:25:17.307362526 +0000 UTC m=+4569.018887002" Nov 24 12:25:17 crc kubenswrapper[5072]: I1124 12:25:17.311806 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xd6gh" Nov 24 12:25:19 crc kubenswrapper[5072]: I1124 12:25:19.270118 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xd6gh"] Nov 24 12:25:19 crc kubenswrapper[5072]: I1124 12:25:19.272118 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xd6gh" podUID="f71f0d16-cdd3-4830-b04a-21c40dca10d9" containerName="registry-server" containerID="cri-o://f02a201a3c4214faeee8338fadc6a7b60d7bc10373627df435baad66092d9cac" gracePeriod=2 Nov 24 12:25:19 crc kubenswrapper[5072]: I1124 12:25:19.790479 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xd6gh" Nov 24 12:25:19 crc kubenswrapper[5072]: I1124 12:25:19.884872 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f71f0d16-cdd3-4830-b04a-21c40dca10d9-utilities\") pod \"f71f0d16-cdd3-4830-b04a-21c40dca10d9\" (UID: \"f71f0d16-cdd3-4830-b04a-21c40dca10d9\") " Nov 24 12:25:19 crc kubenswrapper[5072]: I1124 12:25:19.884974 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f71f0d16-cdd3-4830-b04a-21c40dca10d9-catalog-content\") pod \"f71f0d16-cdd3-4830-b04a-21c40dca10d9\" (UID: \"f71f0d16-cdd3-4830-b04a-21c40dca10d9\") " Nov 24 12:25:19 crc kubenswrapper[5072]: I1124 12:25:19.885018 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhrbt\" (UniqueName: \"kubernetes.io/projected/f71f0d16-cdd3-4830-b04a-21c40dca10d9-kube-api-access-lhrbt\") pod \"f71f0d16-cdd3-4830-b04a-21c40dca10d9\" (UID: \"f71f0d16-cdd3-4830-b04a-21c40dca10d9\") " Nov 24 12:25:19 crc kubenswrapper[5072]: I1124 12:25:19.886017 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f71f0d16-cdd3-4830-b04a-21c40dca10d9-utilities" (OuterVolumeSpecName: "utilities") pod "f71f0d16-cdd3-4830-b04a-21c40dca10d9" (UID: "f71f0d16-cdd3-4830-b04a-21c40dca10d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:25:19 crc kubenswrapper[5072]: I1124 12:25:19.892110 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f71f0d16-cdd3-4830-b04a-21c40dca10d9-kube-api-access-lhrbt" (OuterVolumeSpecName: "kube-api-access-lhrbt") pod "f71f0d16-cdd3-4830-b04a-21c40dca10d9" (UID: "f71f0d16-cdd3-4830-b04a-21c40dca10d9"). InnerVolumeSpecName "kube-api-access-lhrbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:25:19 crc kubenswrapper[5072]: I1124 12:25:19.985747 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f71f0d16-cdd3-4830-b04a-21c40dca10d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f71f0d16-cdd3-4830-b04a-21c40dca10d9" (UID: "f71f0d16-cdd3-4830-b04a-21c40dca10d9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:25:19 crc kubenswrapper[5072]: I1124 12:25:19.987688 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f71f0d16-cdd3-4830-b04a-21c40dca10d9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:25:19 crc kubenswrapper[5072]: I1124 12:25:19.987709 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhrbt\" (UniqueName: \"kubernetes.io/projected/f71f0d16-cdd3-4830-b04a-21c40dca10d9-kube-api-access-lhrbt\") on node \"crc\" DevicePath \"\"" Nov 24 12:25:19 crc kubenswrapper[5072]: I1124 12:25:19.987720 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f71f0d16-cdd3-4830-b04a-21c40dca10d9-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:25:20 crc kubenswrapper[5072]: I1124 12:25:20.016707 5072 scope.go:117] "RemoveContainer" containerID="5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e" Nov 24 12:25:20 crc kubenswrapper[5072]: E1124 12:25:20.017291 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:25:20 crc kubenswrapper[5072]: I1124 12:25:20.285413 5072 generic.go:334] "Generic (PLEG): container finished" podID="f71f0d16-cdd3-4830-b04a-21c40dca10d9" containerID="f02a201a3c4214faeee8338fadc6a7b60d7bc10373627df435baad66092d9cac" exitCode=0 Nov 24 12:25:20 crc kubenswrapper[5072]: I1124 12:25:20.285707 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xd6gh" Nov 24 12:25:20 crc kubenswrapper[5072]: I1124 12:25:20.293412 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xd6gh" event={"ID":"f71f0d16-cdd3-4830-b04a-21c40dca10d9","Type":"ContainerDied","Data":"f02a201a3c4214faeee8338fadc6a7b60d7bc10373627df435baad66092d9cac"} Nov 24 12:25:20 crc kubenswrapper[5072]: I1124 12:25:20.293468 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xd6gh" event={"ID":"f71f0d16-cdd3-4830-b04a-21c40dca10d9","Type":"ContainerDied","Data":"d6a17608af74122ececb6c2541be5155e9124ff8bc63610538ba0478c94da3a9"} Nov 24 12:25:20 crc kubenswrapper[5072]: I1124 12:25:20.293489 5072 scope.go:117] "RemoveContainer" containerID="f02a201a3c4214faeee8338fadc6a7b60d7bc10373627df435baad66092d9cac" Nov 24 12:25:20 crc kubenswrapper[5072]: I1124 12:25:20.323652 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xd6gh"] Nov 24 12:25:20 crc kubenswrapper[5072]: I1124 12:25:20.329316 5072 scope.go:117] "RemoveContainer" containerID="e47b6eed82f9f41576d69d2d6ce0d958b8125bbc8cb3b34ae15edc7bd66984b0" Nov 24 12:25:20 crc kubenswrapper[5072]: I1124 12:25:20.331225 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xd6gh"] Nov 24 12:25:20 crc kubenswrapper[5072]: I1124 12:25:20.348669 5072 scope.go:117] "RemoveContainer" containerID="3b54e97a0c745812a959be344fe1369c3670cffa508be7bd02993ffde6c206e3" Nov 24 12:25:20 crc kubenswrapper[5072]: I1124 12:25:20.392153 5072 scope.go:117] "RemoveContainer" containerID="f02a201a3c4214faeee8338fadc6a7b60d7bc10373627df435baad66092d9cac" Nov 24 12:25:20 crc kubenswrapper[5072]: E1124 12:25:20.392639 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f02a201a3c4214faeee8338fadc6a7b60d7bc10373627df435baad66092d9cac\": container with ID starting with f02a201a3c4214faeee8338fadc6a7b60d7bc10373627df435baad66092d9cac not found: ID does not exist" containerID="f02a201a3c4214faeee8338fadc6a7b60d7bc10373627df435baad66092d9cac" Nov 24 12:25:20 crc kubenswrapper[5072]: I1124 12:25:20.392679 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f02a201a3c4214faeee8338fadc6a7b60d7bc10373627df435baad66092d9cac"} err="failed to get container status \"f02a201a3c4214faeee8338fadc6a7b60d7bc10373627df435baad66092d9cac\": rpc error: code = NotFound desc = could not find container \"f02a201a3c4214faeee8338fadc6a7b60d7bc10373627df435baad66092d9cac\": container with ID starting with f02a201a3c4214faeee8338fadc6a7b60d7bc10373627df435baad66092d9cac not found: ID does not exist" Nov 24 12:25:20 crc kubenswrapper[5072]: I1124 12:25:20.392705 5072 scope.go:117] "RemoveContainer" containerID="e47b6eed82f9f41576d69d2d6ce0d958b8125bbc8cb3b34ae15edc7bd66984b0" Nov 24 12:25:20 crc kubenswrapper[5072]: E1124 12:25:20.392918 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e47b6eed82f9f41576d69d2d6ce0d958b8125bbc8cb3b34ae15edc7bd66984b0\": container with ID starting with e47b6eed82f9f41576d69d2d6ce0d958b8125bbc8cb3b34ae15edc7bd66984b0 not found: ID does not exist" containerID="e47b6eed82f9f41576d69d2d6ce0d958b8125bbc8cb3b34ae15edc7bd66984b0" Nov 24 12:25:20 crc kubenswrapper[5072]: I1124 12:25:20.392948 5072 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e47b6eed82f9f41576d69d2d6ce0d958b8125bbc8cb3b34ae15edc7bd66984b0"} err="failed to get container status \"e47b6eed82f9f41576d69d2d6ce0d958b8125bbc8cb3b34ae15edc7bd66984b0\": rpc error: code = NotFound desc = could not find container \"e47b6eed82f9f41576d69d2d6ce0d958b8125bbc8cb3b34ae15edc7bd66984b0\": container with ID starting with e47b6eed82f9f41576d69d2d6ce0d958b8125bbc8cb3b34ae15edc7bd66984b0 not found: ID does not exist" Nov 24 12:25:20 crc kubenswrapper[5072]: I1124 12:25:20.392965 5072 scope.go:117] "RemoveContainer" containerID="3b54e97a0c745812a959be344fe1369c3670cffa508be7bd02993ffde6c206e3" Nov 24 12:25:20 crc kubenswrapper[5072]: E1124 12:25:20.393332 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b54e97a0c745812a959be344fe1369c3670cffa508be7bd02993ffde6c206e3\": container with ID starting with 3b54e97a0c745812a959be344fe1369c3670cffa508be7bd02993ffde6c206e3 not found: ID does not exist" containerID="3b54e97a0c745812a959be344fe1369c3670cffa508be7bd02993ffde6c206e3" Nov 24 12:25:20 crc kubenswrapper[5072]: I1124 12:25:20.393360 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b54e97a0c745812a959be344fe1369c3670cffa508be7bd02993ffde6c206e3"} err="failed to get container status \"3b54e97a0c745812a959be344fe1369c3670cffa508be7bd02993ffde6c206e3\": rpc error: code = NotFound desc = could not find container \"3b54e97a0c745812a959be344fe1369c3670cffa508be7bd02993ffde6c206e3\": container with ID starting with 3b54e97a0c745812a959be344fe1369c3670cffa508be7bd02993ffde6c206e3 not found: ID does not exist" Nov 24 12:25:21 crc kubenswrapper[5072]: I1124 12:25:21.026913 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f71f0d16-cdd3-4830-b04a-21c40dca10d9" path="/var/lib/kubelet/pods/f71f0d16-cdd3-4830-b04a-21c40dca10d9/volumes" Nov 24 12:25:22 crc kubenswrapper[5072]: I1124 12:25:22.239897 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nxbx5" Nov 24 12:25:22 crc kubenswrapper[5072]: I1124 12:25:22.240209 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nxbx5" Nov 24 12:25:22 crc kubenswrapper[5072]: I1124 12:25:22.286899 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nxbx5" Nov 24 12:25:22 crc kubenswrapper[5072]: I1124 12:25:22.359997 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nxbx5" Nov 24 12:25:23 crc kubenswrapper[5072]: I1124 12:25:23.484205 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nxbx5"] Nov 24 12:25:24 crc kubenswrapper[5072]: I1124 12:25:24.331659 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nxbx5" podUID="469ac11b-b247-4725-bb22-c7be72c437a2" containerName="registry-server" containerID="cri-o://5d2b5ce9e725716350be463c6644da50c5807193be1111327117823669d19903" gracePeriod=2 Nov 24 12:25:25 crc kubenswrapper[5072]: I1124 12:25:25.375160 5072 generic.go:334] "Generic (PLEG): container finished" podID="469ac11b-b247-4725-bb22-c7be72c437a2" 
containerID="5d2b5ce9e725716350be463c6644da50c5807193be1111327117823669d19903" exitCode=0 Nov 24 12:25:25 crc kubenswrapper[5072]: I1124 12:25:25.375234 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxbx5" event={"ID":"469ac11b-b247-4725-bb22-c7be72c437a2","Type":"ContainerDied","Data":"5d2b5ce9e725716350be463c6644da50c5807193be1111327117823669d19903"} Nov 24 12:25:25 crc kubenswrapper[5072]: I1124 12:25:25.547411 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nxbx5" Nov 24 12:25:25 crc kubenswrapper[5072]: I1124 12:25:25.594845 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlqmw\" (UniqueName: \"kubernetes.io/projected/469ac11b-b247-4725-bb22-c7be72c437a2-kube-api-access-xlqmw\") pod \"469ac11b-b247-4725-bb22-c7be72c437a2\" (UID: \"469ac11b-b247-4725-bb22-c7be72c437a2\") " Nov 24 12:25:25 crc kubenswrapper[5072]: I1124 12:25:25.594996 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/469ac11b-b247-4725-bb22-c7be72c437a2-utilities\") pod \"469ac11b-b247-4725-bb22-c7be72c437a2\" (UID: \"469ac11b-b247-4725-bb22-c7be72c437a2\") " Nov 24 12:25:25 crc kubenswrapper[5072]: I1124 12:25:25.595030 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/469ac11b-b247-4725-bb22-c7be72c437a2-catalog-content\") pod \"469ac11b-b247-4725-bb22-c7be72c437a2\" (UID: \"469ac11b-b247-4725-bb22-c7be72c437a2\") " Nov 24 12:25:25 crc kubenswrapper[5072]: I1124 12:25:25.596642 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/469ac11b-b247-4725-bb22-c7be72c437a2-utilities" (OuterVolumeSpecName: "utilities") pod "469ac11b-b247-4725-bb22-c7be72c437a2" (UID: "469ac11b-b247-4725-bb22-c7be72c437a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:25:25 crc kubenswrapper[5072]: I1124 12:25:25.606882 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/469ac11b-b247-4725-bb22-c7be72c437a2-kube-api-access-xlqmw" (OuterVolumeSpecName: "kube-api-access-xlqmw") pod "469ac11b-b247-4725-bb22-c7be72c437a2" (UID: "469ac11b-b247-4725-bb22-c7be72c437a2"). InnerVolumeSpecName "kube-api-access-xlqmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:25:25 crc kubenswrapper[5072]: I1124 12:25:25.641131 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/469ac11b-b247-4725-bb22-c7be72c437a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "469ac11b-b247-4725-bb22-c7be72c437a2" (UID: "469ac11b-b247-4725-bb22-c7be72c437a2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:25:25 crc kubenswrapper[5072]: I1124 12:25:25.697143 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/469ac11b-b247-4725-bb22-c7be72c437a2-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:25:25 crc kubenswrapper[5072]: I1124 12:25:25.697190 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/469ac11b-b247-4725-bb22-c7be72c437a2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:25:25 crc kubenswrapper[5072]: I1124 12:25:25.697209 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlqmw\" (UniqueName: \"kubernetes.io/projected/469ac11b-b247-4725-bb22-c7be72c437a2-kube-api-access-xlqmw\") on node \"crc\" DevicePath \"\"" Nov 24 12:25:26 crc kubenswrapper[5072]: I1124 12:25:26.387666 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxbx5" event={"ID":"469ac11b-b247-4725-bb22-c7be72c437a2","Type":"ContainerDied","Data":"c4cd49a131250fb5eeb74860839e9a6aad715223de7cd219a2fee431be4dfc0d"} Nov 24 12:25:26 crc kubenswrapper[5072]: I1124 12:25:26.387958 5072 scope.go:117] "RemoveContainer" containerID="5d2b5ce9e725716350be463c6644da50c5807193be1111327117823669d19903" Nov 24 12:25:26 crc kubenswrapper[5072]: I1124 12:25:26.388237 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nxbx5" Nov 24 12:25:26 crc kubenswrapper[5072]: I1124 12:25:26.413553 5072 scope.go:117] "RemoveContainer" containerID="8cd384e67d70e329675d5cc7fd21753928e16ba58d5d19c70de0d61fc56ecab3" Nov 24 12:25:26 crc kubenswrapper[5072]: I1124 12:25:26.432338 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nxbx5"] Nov 24 12:25:26 crc kubenswrapper[5072]: I1124 12:25:26.440210 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nxbx5"] Nov 24 12:25:26 crc kubenswrapper[5072]: I1124 12:25:26.732992 5072 scope.go:117] "RemoveContainer" containerID="2277352de04354a35d4ca60d58f3c221ca1ac3d665e7087fab55de1b2c2dfa22" Nov 24 12:25:27 crc kubenswrapper[5072]: I1124 12:25:27.035760 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="469ac11b-b247-4725-bb22-c7be72c437a2" path="/var/lib/kubelet/pods/469ac11b-b247-4725-bb22-c7be72c437a2/volumes" Nov 24 12:25:33 crc kubenswrapper[5072]: I1124 12:25:33.016450 5072 scope.go:117] "RemoveContainer" containerID="5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e" Nov 24 12:25:33 crc kubenswrapper[5072]: E1124 12:25:33.017234 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:25:44 crc kubenswrapper[5072]: I1124 12:25:44.565341 5072 generic.go:334] "Generic (PLEG): container finished" podID="eff8ab72-c26f-4434-a4f1-1a19dbe034ba" containerID="a28b3c2b95aef109795b6fc4cbd99c5e2a681c6f2b02cef137cde68082450edc" exitCode=0 Nov 24 12:25:44 crc kubenswrapper[5072]: I1124 12:25:44.565429 5072 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-must-gather-x5lkh/must-gather-h9lrd" event={"ID":"eff8ab72-c26f-4434-a4f1-1a19dbe034ba","Type":"ContainerDied","Data":"a28b3c2b95aef109795b6fc4cbd99c5e2a681c6f2b02cef137cde68082450edc"} Nov 24 12:25:44 crc kubenswrapper[5072]: I1124 12:25:44.566913 5072 scope.go:117] "RemoveContainer" containerID="a28b3c2b95aef109795b6fc4cbd99c5e2a681c6f2b02cef137cde68082450edc" Nov 24 12:25:45 crc kubenswrapper[5072]: I1124 12:25:45.211259 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-x5lkh_must-gather-h9lrd_eff8ab72-c26f-4434-a4f1-1a19dbe034ba/gather/0.log" Nov 24 12:25:48 crc kubenswrapper[5072]: I1124 12:25:48.016596 5072 scope.go:117] "RemoveContainer" containerID="5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e" Nov 24 12:25:48 crc kubenswrapper[5072]: E1124 12:25:48.017457 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:25:53 crc kubenswrapper[5072]: I1124 12:25:53.485851 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x5lkh/must-gather-h9lrd"] Nov 24 12:25:53 crc kubenswrapper[5072]: I1124 12:25:53.486647 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-x5lkh/must-gather-h9lrd" podUID="eff8ab72-c26f-4434-a4f1-1a19dbe034ba" containerName="copy" containerID="cri-o://e473bd2a93bcfb019dbea326946e4ecb12889b9d8000fea79757b4e5a7b4311a" gracePeriod=2 Nov 24 12:25:53 crc kubenswrapper[5072]: I1124 12:25:53.493854 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x5lkh/must-gather-h9lrd"] Nov 24 12:25:53 crc kubenswrapper[5072]: I1124 12:25:53.692953 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-x5lkh_must-gather-h9lrd_eff8ab72-c26f-4434-a4f1-1a19dbe034ba/copy/0.log" Nov 24 12:25:53 crc kubenswrapper[5072]: I1124 12:25:53.700818 5072 generic.go:334] "Generic (PLEG): container finished" podID="eff8ab72-c26f-4434-a4f1-1a19dbe034ba" containerID="e473bd2a93bcfb019dbea326946e4ecb12889b9d8000fea79757b4e5a7b4311a" exitCode=143 Nov 24 12:25:54 crc kubenswrapper[5072]: I1124 12:25:54.165097 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-x5lkh_must-gather-h9lrd_eff8ab72-c26f-4434-a4f1-1a19dbe034ba/copy/0.log" Nov 24 12:25:54 crc kubenswrapper[5072]: I1124 12:25:54.165942 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x5lkh/must-gather-h9lrd" Nov 24 12:25:54 crc kubenswrapper[5072]: I1124 12:25:54.340264 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/eff8ab72-c26f-4434-a4f1-1a19dbe034ba-must-gather-output\") pod \"eff8ab72-c26f-4434-a4f1-1a19dbe034ba\" (UID: \"eff8ab72-c26f-4434-a4f1-1a19dbe034ba\") " Nov 24 12:25:54 crc kubenswrapper[5072]: I1124 12:25:54.340604 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwc8c\" (UniqueName: \"kubernetes.io/projected/eff8ab72-c26f-4434-a4f1-1a19dbe034ba-kube-api-access-nwc8c\") pod \"eff8ab72-c26f-4434-a4f1-1a19dbe034ba\" (UID: \"eff8ab72-c26f-4434-a4f1-1a19dbe034ba\") " Nov 24 12:25:54 crc kubenswrapper[5072]: I1124 12:25:54.350743 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eff8ab72-c26f-4434-a4f1-1a19dbe034ba-kube-api-access-nwc8c" (OuterVolumeSpecName: "kube-api-access-nwc8c") pod "eff8ab72-c26f-4434-a4f1-1a19dbe034ba" (UID: "eff8ab72-c26f-4434-a4f1-1a19dbe034ba"). InnerVolumeSpecName "kube-api-access-nwc8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:25:54 crc kubenswrapper[5072]: I1124 12:25:54.442737 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwc8c\" (UniqueName: \"kubernetes.io/projected/eff8ab72-c26f-4434-a4f1-1a19dbe034ba-kube-api-access-nwc8c\") on node \"crc\" DevicePath \"\"" Nov 24 12:25:54 crc kubenswrapper[5072]: I1124 12:25:54.500455 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eff8ab72-c26f-4434-a4f1-1a19dbe034ba-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "eff8ab72-c26f-4434-a4f1-1a19dbe034ba" (UID: "eff8ab72-c26f-4434-a4f1-1a19dbe034ba"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:25:54 crc kubenswrapper[5072]: I1124 12:25:54.544657 5072 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/eff8ab72-c26f-4434-a4f1-1a19dbe034ba-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 24 12:25:54 crc kubenswrapper[5072]: I1124 12:25:54.711140 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-x5lkh_must-gather-h9lrd_eff8ab72-c26f-4434-a4f1-1a19dbe034ba/copy/0.log" Nov 24 12:25:54 crc kubenswrapper[5072]: I1124 12:25:54.711579 5072 scope.go:117] "RemoveContainer" containerID="e473bd2a93bcfb019dbea326946e4ecb12889b9d8000fea79757b4e5a7b4311a" Nov 24 12:25:54 crc kubenswrapper[5072]: I1124 12:25:54.711700 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x5lkh/must-gather-h9lrd" Nov 24 12:25:54 crc kubenswrapper[5072]: I1124 12:25:54.736435 5072 scope.go:117] "RemoveContainer" containerID="a28b3c2b95aef109795b6fc4cbd99c5e2a681c6f2b02cef137cde68082450edc" Nov 24 12:25:55 crc kubenswrapper[5072]: I1124 12:25:55.028198 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eff8ab72-c26f-4434-a4f1-1a19dbe034ba" path="/var/lib/kubelet/pods/eff8ab72-c26f-4434-a4f1-1a19dbe034ba/volumes" Nov 24 12:26:02 crc kubenswrapper[5072]: I1124 12:26:02.016640 5072 scope.go:117] "RemoveContainer" containerID="5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e" Nov 24 12:26:02 crc kubenswrapper[5072]: E1124 12:26:02.018072 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:26:13 crc kubenswrapper[5072]: I1124 12:26:13.016190 5072 scope.go:117] "RemoveContainer" containerID="5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e" Nov 24 12:26:13 crc kubenswrapper[5072]: E1124 12:26:13.018411 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:26:23 crc kubenswrapper[5072]: I1124 12:26:23.792013 5072 scope.go:117] "RemoveContainer" containerID="917e903071d55258fc1b06727b6c3c2590911a9c4dbd07b544f7432e04fc1e56" Nov 24 12:26:25 crc kubenswrapper[5072]: I1124 12:26:25.016779 5072 scope.go:117] "RemoveContainer" containerID="5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e" Nov 24 12:26:25 crc kubenswrapper[5072]: E1124 12:26:25.017773 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:26:39 crc kubenswrapper[5072]: I1124 12:26:39.024193 5072 scope.go:117] "RemoveContainer" containerID="5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e" Nov 24 12:26:39 crc kubenswrapper[5072]: E1124 12:26:39.025117 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:26:54 crc kubenswrapper[5072]: I1124 12:26:54.016416 5072 scope.go:117] "RemoveContainer" containerID="5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e" 
Nov 24 12:26:54 crc kubenswrapper[5072]: E1124 12:26:54.017118 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5"
Nov 24 12:27:05 crc kubenswrapper[5072]: I1124 12:27:05.016732 5072 scope.go:117] "RemoveContainer" containerID="5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e"
Nov 24 12:27:05 crc kubenswrapper[5072]: E1124 12:27:05.017887 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5"
Nov 24 12:27:16 crc kubenswrapper[5072]: I1124 12:27:16.016821 5072 scope.go:117] "RemoveContainer" containerID="5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e"
Nov 24 12:27:16 crc kubenswrapper[5072]: I1124 12:27:16.479813 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerStarted","Data":"7b04b3f19e5637c82668c12efbec9e34299e5f49bfee3074b7fc7f031d0a99f9"}
Nov 24 12:27:17 crc kubenswrapper[5072]: I1124 12:27:17.822276 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-54wm9"]
Nov 24 12:27:17 crc kubenswrapper[5072]: E1124 12:27:17.823198 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eff8ab72-c26f-4434-a4f1-1a19dbe034ba" containerName="copy"
Nov 24 12:27:17 crc kubenswrapper[5072]: I1124 12:27:17.823215 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="eff8ab72-c26f-4434-a4f1-1a19dbe034ba" containerName="copy"
Nov 24 12:27:17 crc kubenswrapper[5072]: E1124 12:27:17.823228 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f71f0d16-cdd3-4830-b04a-21c40dca10d9" containerName="extract-utilities"
Nov 24 12:27:17 crc kubenswrapper[5072]: I1124 12:27:17.823235 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="f71f0d16-cdd3-4830-b04a-21c40dca10d9" containerName="extract-utilities"
Nov 24 12:27:17 crc kubenswrapper[5072]: E1124 12:27:17.823252 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f71f0d16-cdd3-4830-b04a-21c40dca10d9" containerName="extract-content"
Nov 24 12:27:17 crc kubenswrapper[5072]: I1124 12:27:17.823258 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="f71f0d16-cdd3-4830-b04a-21c40dca10d9" containerName="extract-content"
Nov 24 12:27:17 crc kubenswrapper[5072]: E1124 12:27:17.823275 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="469ac11b-b247-4725-bb22-c7be72c437a2" containerName="extract-content"
Nov 24 12:27:17 crc kubenswrapper[5072]: I1124 12:27:17.823280 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="469ac11b-b247-4725-bb22-c7be72c437a2" containerName="extract-content"
Nov 24 12:27:17 crc kubenswrapper[5072]: E1124 12:27:17.823292 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f71f0d16-cdd3-4830-b04a-21c40dca10d9" containerName="registry-server"
Nov 24 12:27:17 crc kubenswrapper[5072]: I1124 12:27:17.823298 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="f71f0d16-cdd3-4830-b04a-21c40dca10d9" containerName="registry-server"
Nov 24 12:27:17 crc kubenswrapper[5072]: E1124 12:27:17.823309 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="469ac11b-b247-4725-bb22-c7be72c437a2" containerName="registry-server"
Nov 24 12:27:17 crc kubenswrapper[5072]: I1124 12:27:17.823314 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="469ac11b-b247-4725-bb22-c7be72c437a2" containerName="registry-server"
Nov 24 12:27:17 crc kubenswrapper[5072]: E1124 12:27:17.823324 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="469ac11b-b247-4725-bb22-c7be72c437a2" containerName="extract-utilities"
Nov 24 12:27:17 crc kubenswrapper[5072]: I1124 12:27:17.823329 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="469ac11b-b247-4725-bb22-c7be72c437a2" containerName="extract-utilities"
Nov 24 12:27:17 crc kubenswrapper[5072]: E1124 12:27:17.823347 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eff8ab72-c26f-4434-a4f1-1a19dbe034ba" containerName="gather"
Nov 24 12:27:17 crc kubenswrapper[5072]: I1124 12:27:17.823352 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="eff8ab72-c26f-4434-a4f1-1a19dbe034ba" containerName="gather"
Nov 24 12:27:17 crc kubenswrapper[5072]: I1124 12:27:17.823564 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="f71f0d16-cdd3-4830-b04a-21c40dca10d9" containerName="registry-server"
Nov 24 12:27:17 crc kubenswrapper[5072]: I1124 12:27:17.823587 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="eff8ab72-c26f-4434-a4f1-1a19dbe034ba" containerName="gather"
Nov 24 12:27:17 crc kubenswrapper[5072]: I1124 12:27:17.823600 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="eff8ab72-c26f-4434-a4f1-1a19dbe034ba" containerName="copy"
Nov 24 12:27:17 crc kubenswrapper[5072]: I1124 12:27:17.823617 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="469ac11b-b247-4725-bb22-c7be72c437a2" containerName="registry-server"
Nov 24 12:27:17 crc kubenswrapper[5072]: I1124 12:27:17.825967 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-54wm9"
Nov 24 12:27:17 crc kubenswrapper[5072]: I1124 12:27:17.836005 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-54wm9"]
Nov 24 12:27:17 crc kubenswrapper[5072]: I1124 12:27:17.898334 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce7809d3-2c75-4fd7-a787-3943cabfe52e-catalog-content\") pod \"redhat-marketplace-54wm9\" (UID: \"ce7809d3-2c75-4fd7-a787-3943cabfe52e\") " pod="openshift-marketplace/redhat-marketplace-54wm9"
Nov 24 12:27:17 crc kubenswrapper[5072]: I1124 12:27:17.898761 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlnbb\" (UniqueName: \"kubernetes.io/projected/ce7809d3-2c75-4fd7-a787-3943cabfe52e-kube-api-access-wlnbb\") pod \"redhat-marketplace-54wm9\" (UID: \"ce7809d3-2c75-4fd7-a787-3943cabfe52e\") " pod="openshift-marketplace/redhat-marketplace-54wm9"
Nov 24 12:27:17 crc kubenswrapper[5072]: I1124 12:27:17.899146 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce7809d3-2c75-4fd7-a787-3943cabfe52e-utilities\") pod \"redhat-marketplace-54wm9\" (UID: \"ce7809d3-2c75-4fd7-a787-3943cabfe52e\") " pod="openshift-marketplace/redhat-marketplace-54wm9"
Nov 24 12:27:18 crc kubenswrapper[5072]: I1124 12:27:18.000800 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce7809d3-2c75-4fd7-a787-3943cabfe52e-catalog-content\") pod \"redhat-marketplace-54wm9\" (UID: \"ce7809d3-2c75-4fd7-a787-3943cabfe52e\") " pod="openshift-marketplace/redhat-marketplace-54wm9"
Nov 24 12:27:18 crc kubenswrapper[5072]: I1124 12:27:18.000942 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlnbb\" (UniqueName: \"kubernetes.io/projected/ce7809d3-2c75-4fd7-a787-3943cabfe52e-kube-api-access-wlnbb\") pod \"redhat-marketplace-54wm9\" (UID: \"ce7809d3-2c75-4fd7-a787-3943cabfe52e\") " pod="openshift-marketplace/redhat-marketplace-54wm9"
Nov 24 12:27:18 crc kubenswrapper[5072]: I1124 12:27:18.001028 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce7809d3-2c75-4fd7-a787-3943cabfe52e-utilities\") pod \"redhat-marketplace-54wm9\" (UID: \"ce7809d3-2c75-4fd7-a787-3943cabfe52e\") " pod="openshift-marketplace/redhat-marketplace-54wm9"
Nov 24 12:27:18 crc kubenswrapper[5072]: I1124 12:27:18.001612 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce7809d3-2c75-4fd7-a787-3943cabfe52e-catalog-content\") pod \"redhat-marketplace-54wm9\" (UID: \"ce7809d3-2c75-4fd7-a787-3943cabfe52e\") " pod="openshift-marketplace/redhat-marketplace-54wm9"
Nov 24 12:27:18 crc kubenswrapper[5072]: I1124 12:27:18.001650 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce7809d3-2c75-4fd7-a787-3943cabfe52e-utilities\") pod \"redhat-marketplace-54wm9\" (UID: \"ce7809d3-2c75-4fd7-a787-3943cabfe52e\") " pod="openshift-marketplace/redhat-marketplace-54wm9"
Nov 24 12:27:18 crc kubenswrapper[5072]: I1124 12:27:18.037133 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlnbb\" (UniqueName: \"kubernetes.io/projected/ce7809d3-2c75-4fd7-a787-3943cabfe52e-kube-api-access-wlnbb\") pod \"redhat-marketplace-54wm9\" (UID: \"ce7809d3-2c75-4fd7-a787-3943cabfe52e\") " pod="openshift-marketplace/redhat-marketplace-54wm9"
Nov 24 12:27:18 crc kubenswrapper[5072]: I1124 12:27:18.170294 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-54wm9"
Nov 24 12:27:18 crc kubenswrapper[5072]: I1124 12:27:18.633996 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-54wm9"]
Nov 24 12:27:19 crc kubenswrapper[5072]: I1124 12:27:19.517713 5072 generic.go:334] "Generic (PLEG): container finished" podID="ce7809d3-2c75-4fd7-a787-3943cabfe52e" containerID="a479eb56f8e2545820056b92670a809a54fac3c56fd19b21b7806b7df988dbe8" exitCode=0
Nov 24 12:27:19 crc kubenswrapper[5072]: I1124 12:27:19.517827 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54wm9" event={"ID":"ce7809d3-2c75-4fd7-a787-3943cabfe52e","Type":"ContainerDied","Data":"a479eb56f8e2545820056b92670a809a54fac3c56fd19b21b7806b7df988dbe8"}
Nov 24 12:27:19 crc kubenswrapper[5072]: I1124 12:27:19.517995 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54wm9" event={"ID":"ce7809d3-2c75-4fd7-a787-3943cabfe52e","Type":"ContainerStarted","Data":"39a47b87f835a804c565086eee9db6791039c9798cefd071a07177b397510b3a"}
Nov 24 12:27:22 crc kubenswrapper[5072]: I1124 12:27:22.544009 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54wm9" event={"ID":"ce7809d3-2c75-4fd7-a787-3943cabfe52e","Type":"ContainerStarted","Data":"bd04ce9c0f52e889691aa00e0712c533580e73e0fb5c68489666240ec1050c22"}
Nov 24 12:27:23 crc kubenswrapper[5072]: I1124 12:27:23.559966 5072 generic.go:334] "Generic (PLEG): container finished" podID="ce7809d3-2c75-4fd7-a787-3943cabfe52e" containerID="bd04ce9c0f52e889691aa00e0712c533580e73e0fb5c68489666240ec1050c22" exitCode=0
Nov 24 12:27:23 crc kubenswrapper[5072]: I1124 12:27:23.560089 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54wm9" event={"ID":"ce7809d3-2c75-4fd7-a787-3943cabfe52e","Type":"ContainerDied","Data":"bd04ce9c0f52e889691aa00e0712c533580e73e0fb5c68489666240ec1050c22"}
Nov 24 12:27:23 crc kubenswrapper[5072]: I1124 12:27:23.883500 5072 scope.go:117] "RemoveContainer" containerID="f44be46ba01caddd22551e1313d5b7f1e41c8b007092cf4a7a53df854bd93017"
Nov 24 12:27:24 crc kubenswrapper[5072]: I1124 12:27:24.574308 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54wm9" event={"ID":"ce7809d3-2c75-4fd7-a787-3943cabfe52e","Type":"ContainerStarted","Data":"3c0e5f9e72579e43cb509c64d1f907be687c7a3c65329f6fb3e5a0fc77264643"}
Nov 24 12:27:24 crc kubenswrapper[5072]: I1124 12:27:24.602906 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-54wm9" podStartSLOduration=2.825014881 podStartE2EDuration="7.602883416s" podCreationTimestamp="2025-11-24 12:27:17 +0000 UTC" firstStartedPulling="2025-11-24 12:27:19.520328865 +0000 UTC m=+4691.231853381" lastFinishedPulling="2025-11-24 12:27:24.29819744 +0000 UTC m=+4696.009721916" observedRunningTime="2025-11-24 12:27:24.592575797 +0000 UTC m=+4696.304100273" watchObservedRunningTime="2025-11-24 12:27:24.602883416 +0000 UTC m=+4696.314407892"
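The pod_startup_latency_tracker entry just above encodes a small calculation: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (firstStartedPulling to lastFinishedPulling). A sketch that reproduces the numbers from the logged wall-clock timestamps (the formula is inferred from the values, not documented in this log):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-11-24 12:27:17 +0000 UTC")
	firstPull := parse("2025-11-24 12:27:19.520328865 +0000 UTC")
	lastPull := parse("2025-11-24 12:27:24.29819744 +0000 UTC")
	running := parse("2025-11-24 12:27:24.602883416 +0000 UTC")

	e2e := running.Sub(created)          // 7.602883416s == podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // ~2.825s == podStartSLOduration
	fmt.Println(e2e, slo)
}
```

The SLO figure comes out a few tens of nanoseconds off the logged 2.825014881; it matches exactly if the pull window is taken from the monotonic m=+... offsets instead of the wall-clock strings, which suggests the kubelet computes it on the monotonic clock.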
Nov 24 12:27:28 crc kubenswrapper[5072]: I1124 12:27:28.171425 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-54wm9"
Nov 24 12:27:28 crc kubenswrapper[5072]: I1124 12:27:28.171981 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-54wm9"
Nov 24 12:27:28 crc kubenswrapper[5072]: I1124 12:27:28.231029 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-54wm9"
Nov 24 12:27:38 crc kubenswrapper[5072]: I1124 12:27:38.227598 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-54wm9"
Nov 24 12:27:38 crc kubenswrapper[5072]: I1124 12:27:38.278210 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-54wm9"]
Nov 24 12:27:38 crc kubenswrapper[5072]: I1124 12:27:38.723832 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-54wm9" podUID="ce7809d3-2c75-4fd7-a787-3943cabfe52e" containerName="registry-server" containerID="cri-o://3c0e5f9e72579e43cb509c64d1f907be687c7a3c65329f6fb3e5a0fc77264643" gracePeriod=2
Nov 24 12:27:39 crc kubenswrapper[5072]: I1124 12:27:39.575546 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-54wm9"
Nov 24 12:27:39 crc kubenswrapper[5072]: I1124 12:27:39.673830 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce7809d3-2c75-4fd7-a787-3943cabfe52e-utilities\") pod \"ce7809d3-2c75-4fd7-a787-3943cabfe52e\" (UID: \"ce7809d3-2c75-4fd7-a787-3943cabfe52e\") "
Nov 24 12:27:39 crc kubenswrapper[5072]: I1124 12:27:39.674001 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce7809d3-2c75-4fd7-a787-3943cabfe52e-catalog-content\") pod \"ce7809d3-2c75-4fd7-a787-3943cabfe52e\" (UID: \"ce7809d3-2c75-4fd7-a787-3943cabfe52e\") "
Nov 24 12:27:39 crc kubenswrapper[5072]: I1124 12:27:39.674110 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlnbb\" (UniqueName: \"kubernetes.io/projected/ce7809d3-2c75-4fd7-a787-3943cabfe52e-kube-api-access-wlnbb\") pod \"ce7809d3-2c75-4fd7-a787-3943cabfe52e\" (UID: \"ce7809d3-2c75-4fd7-a787-3943cabfe52e\") "
Nov 24 12:27:39 crc kubenswrapper[5072]: I1124 12:27:39.675301 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce7809d3-2c75-4fd7-a787-3943cabfe52e-utilities" (OuterVolumeSpecName: "utilities") pod "ce7809d3-2c75-4fd7-a787-3943cabfe52e" (UID: "ce7809d3-2c75-4fd7-a787-3943cabfe52e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 12:27:39 crc kubenswrapper[5072]: I1124 12:27:39.680046 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce7809d3-2c75-4fd7-a787-3943cabfe52e-kube-api-access-wlnbb" (OuterVolumeSpecName: "kube-api-access-wlnbb") pod "ce7809d3-2c75-4fd7-a787-3943cabfe52e" (UID: "ce7809d3-2c75-4fd7-a787-3943cabfe52e"). InnerVolumeSpecName "kube-api-access-wlnbb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:27:39 crc kubenswrapper[5072]: I1124 12:27:39.698878 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce7809d3-2c75-4fd7-a787-3943cabfe52e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce7809d3-2c75-4fd7-a787-3943cabfe52e" (UID: "ce7809d3-2c75-4fd7-a787-3943cabfe52e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 12:27:39 crc kubenswrapper[5072]: I1124 12:27:39.735600 5072 generic.go:334] "Generic (PLEG): container finished" podID="ce7809d3-2c75-4fd7-a787-3943cabfe52e" containerID="3c0e5f9e72579e43cb509c64d1f907be687c7a3c65329f6fb3e5a0fc77264643" exitCode=0
Nov 24 12:27:39 crc kubenswrapper[5072]: I1124 12:27:39.735643 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54wm9" event={"ID":"ce7809d3-2c75-4fd7-a787-3943cabfe52e","Type":"ContainerDied","Data":"3c0e5f9e72579e43cb509c64d1f907be687c7a3c65329f6fb3e5a0fc77264643"}
Nov 24 12:27:39 crc kubenswrapper[5072]: I1124 12:27:39.735673 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54wm9" event={"ID":"ce7809d3-2c75-4fd7-a787-3943cabfe52e","Type":"ContainerDied","Data":"39a47b87f835a804c565086eee9db6791039c9798cefd071a07177b397510b3a"}
Nov 24 12:27:39 crc kubenswrapper[5072]: I1124 12:27:39.735677 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-54wm9"
Nov 24 12:27:39 crc kubenswrapper[5072]: I1124 12:27:39.735692 5072 scope.go:117] "RemoveContainer" containerID="3c0e5f9e72579e43cb509c64d1f907be687c7a3c65329f6fb3e5a0fc77264643"
Nov 24 12:27:39 crc kubenswrapper[5072]: I1124 12:27:39.779004 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce7809d3-2c75-4fd7-a787-3943cabfe52e-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 12:27:39 crc kubenswrapper[5072]: I1124 12:27:39.779035 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce7809d3-2c75-4fd7-a787-3943cabfe52e-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 12:27:39 crc kubenswrapper[5072]: I1124 12:27:39.779046 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlnbb\" (UniqueName: \"kubernetes.io/projected/ce7809d3-2c75-4fd7-a787-3943cabfe52e-kube-api-access-wlnbb\") on node \"crc\" DevicePath \"\""
Nov 24 12:27:39 crc kubenswrapper[5072]: I1124 12:27:39.783697 5072 scope.go:117] "RemoveContainer" containerID="bd04ce9c0f52e889691aa00e0712c533580e73e0fb5c68489666240ec1050c22"
Nov 24 12:27:39 crc kubenswrapper[5072]: I1124 12:27:39.789882 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-54wm9"]
Nov 24 12:27:39 crc kubenswrapper[5072]: I1124 12:27:39.800481 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-54wm9"]
Nov 24 12:27:39 crc kubenswrapper[5072]: I1124 12:27:39.807586 5072 scope.go:117] "RemoveContainer" containerID="a479eb56f8e2545820056b92670a809a54fac3c56fd19b21b7806b7df988dbe8"
Nov 24 12:27:39 crc kubenswrapper[5072]: I1124 12:27:39.844713 5072 scope.go:117] "RemoveContainer" containerID="3c0e5f9e72579e43cb509c64d1f907be687c7a3c65329f6fb3e5a0fc77264643"
Nov 24 12:27:39 crc kubenswrapper[5072]: E1124 12:27:39.845110 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c0e5f9e72579e43cb509c64d1f907be687c7a3c65329f6fb3e5a0fc77264643\": container with ID starting with 3c0e5f9e72579e43cb509c64d1f907be687c7a3c65329f6fb3e5a0fc77264643 not found: ID does not exist" containerID="3c0e5f9e72579e43cb509c64d1f907be687c7a3c65329f6fb3e5a0fc77264643"
Nov 24 12:27:39 crc kubenswrapper[5072]: I1124 12:27:39.845142 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c0e5f9e72579e43cb509c64d1f907be687c7a3c65329f6fb3e5a0fc77264643"} err="failed to get container status \"3c0e5f9e72579e43cb509c64d1f907be687c7a3c65329f6fb3e5a0fc77264643\": rpc error: code = NotFound desc = could not find container \"3c0e5f9e72579e43cb509c64d1f907be687c7a3c65329f6fb3e5a0fc77264643\": container with ID starting with 3c0e5f9e72579e43cb509c64d1f907be687c7a3c65329f6fb3e5a0fc77264643 not found: ID does not exist"
Nov 24 12:27:39 crc kubenswrapper[5072]: I1124 12:27:39.845162 5072 scope.go:117] "RemoveContainer" containerID="bd04ce9c0f52e889691aa00e0712c533580e73e0fb5c68489666240ec1050c22"
Nov 24 12:27:39 crc kubenswrapper[5072]: E1124 12:27:39.845657 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd04ce9c0f52e889691aa00e0712c533580e73e0fb5c68489666240ec1050c22\": container with ID starting with bd04ce9c0f52e889691aa00e0712c533580e73e0fb5c68489666240ec1050c22 not found: ID does not exist" containerID="bd04ce9c0f52e889691aa00e0712c533580e73e0fb5c68489666240ec1050c22"
Nov 24 12:27:39 crc kubenswrapper[5072]: I1124 12:27:39.845707 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd04ce9c0f52e889691aa00e0712c533580e73e0fb5c68489666240ec1050c22"} err="failed to get container status \"bd04ce9c0f52e889691aa00e0712c533580e73e0fb5c68489666240ec1050c22\": rpc error: code = NotFound desc = could not find container \"bd04ce9c0f52e889691aa00e0712c533580e73e0fb5c68489666240ec1050c22\": container with ID starting with bd04ce9c0f52e889691aa00e0712c533580e73e0fb5c68489666240ec1050c22 not found: ID does not exist"
Nov 24 12:27:39 crc kubenswrapper[5072]: I1124 12:27:39.845741 5072 scope.go:117] "RemoveContainer" containerID="a479eb56f8e2545820056b92670a809a54fac3c56fd19b21b7806b7df988dbe8"
Nov 24 12:27:39 crc kubenswrapper[5072]: E1124 12:27:39.846114 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a479eb56f8e2545820056b92670a809a54fac3c56fd19b21b7806b7df988dbe8\": container with ID starting with a479eb56f8e2545820056b92670a809a54fac3c56fd19b21b7806b7df988dbe8 not found: ID does not exist" containerID="a479eb56f8e2545820056b92670a809a54fac3c56fd19b21b7806b7df988dbe8"
Nov 24 12:27:39 crc kubenswrapper[5072]: I1124 12:27:39.846169 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a479eb56f8e2545820056b92670a809a54fac3c56fd19b21b7806b7df988dbe8"} err="failed to get container status \"a479eb56f8e2545820056b92670a809a54fac3c56fd19b21b7806b7df988dbe8\": rpc error: code = NotFound desc = could not find container \"a479eb56f8e2545820056b92670a809a54fac3c56fd19b21b7806b7df988dbe8\": container with ID starting with a479eb56f8e2545820056b92670a809a54fac3c56fd19b21b7806b7df988dbe8 not found: ID does not exist"
Nov 24 12:27:41 crc kubenswrapper[5072]: I1124 12:27:41.028746 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce7809d3-2c75-4fd7-a787-3943cabfe52e" path="/var/lib/kubelet/pods/ce7809d3-2c75-4fd7-a787-3943cabfe52e/volumes"
Nov 24 12:28:41 crc kubenswrapper[5072]: I1124 12:28:41.745668 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hvgw9/must-gather-2zwx9"]
Nov 24 12:28:41 crc kubenswrapper[5072]: E1124 12:28:41.746665 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7809d3-2c75-4fd7-a787-3943cabfe52e" containerName="registry-server"
Nov 24 12:28:41 crc kubenswrapper[5072]: I1124 12:28:41.746681 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7809d3-2c75-4fd7-a787-3943cabfe52e" containerName="registry-server"
Nov 24 12:28:41 crc kubenswrapper[5072]: E1124 12:28:41.746698 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7809d3-2c75-4fd7-a787-3943cabfe52e" containerName="extract-content"
Nov 24 12:28:41 crc kubenswrapper[5072]: I1124 12:28:41.746706 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7809d3-2c75-4fd7-a787-3943cabfe52e" containerName="extract-content"
Nov 24 12:28:41 crc kubenswrapper[5072]: E1124 12:28:41.746722 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7809d3-2c75-4fd7-a787-3943cabfe52e" containerName="extract-utilities"
Nov 24 12:28:41 crc kubenswrapper[5072]: I1124 12:28:41.746731 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7809d3-2c75-4fd7-a787-3943cabfe52e" containerName="extract-utilities"
Nov 24 12:28:41 crc kubenswrapper[5072]: I1124 12:28:41.746970 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce7809d3-2c75-4fd7-a787-3943cabfe52e" containerName="registry-server"
Nov 24 12:28:41 crc kubenswrapper[5072]: I1124 12:28:41.748910 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvgw9/must-gather-2zwx9"
Nov 24 12:28:41 crc kubenswrapper[5072]: I1124 12:28:41.755075 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hvgw9"/"openshift-service-ca.crt"
Nov 24 12:28:41 crc kubenswrapper[5072]: I1124 12:28:41.755287 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hvgw9"/"kube-root-ca.crt"
Nov 24 12:28:41 crc kubenswrapper[5072]: I1124 12:28:41.755503 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-hvgw9"/"default-dockercfg-hh4xs"
Nov 24 12:28:41 crc kubenswrapper[5072]: I1124 12:28:41.770536 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hvgw9/must-gather-2zwx9"]
Nov 24 12:28:41 crc kubenswrapper[5072]: I1124 12:28:41.804655 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq858\" (UniqueName: \"kubernetes.io/projected/84996fa3-ea52-4f6d-a4e2-5512ae4c119b-kube-api-access-gq858\") pod \"must-gather-2zwx9\" (UID: \"84996fa3-ea52-4f6d-a4e2-5512ae4c119b\") " pod="openshift-must-gather-hvgw9/must-gather-2zwx9"
Nov 24 12:28:41 crc kubenswrapper[5072]: I1124 12:28:41.804736 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/84996fa3-ea52-4f6d-a4e2-5512ae4c119b-must-gather-output\") pod \"must-gather-2zwx9\" (UID: \"84996fa3-ea52-4f6d-a4e2-5512ae4c119b\") " pod="openshift-must-gather-hvgw9/must-gather-2zwx9"
Nov 24 12:28:41 crc kubenswrapper[5072]: I1124 12:28:41.906956 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq858\" (UniqueName: \"kubernetes.io/projected/84996fa3-ea52-4f6d-a4e2-5512ae4c119b-kube-api-access-gq858\") pod \"must-gather-2zwx9\" (UID: \"84996fa3-ea52-4f6d-a4e2-5512ae4c119b\") " pod="openshift-must-gather-hvgw9/must-gather-2zwx9"
Nov 24 12:28:41 crc kubenswrapper[5072]: I1124 12:28:41.907049 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/84996fa3-ea52-4f6d-a4e2-5512ae4c119b-must-gather-output\") pod \"must-gather-2zwx9\" (UID: \"84996fa3-ea52-4f6d-a4e2-5512ae4c119b\") " pod="openshift-must-gather-hvgw9/must-gather-2zwx9"
Nov 24 12:28:41 crc kubenswrapper[5072]: I1124 12:28:41.907701 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/84996fa3-ea52-4f6d-a4e2-5512ae4c119b-must-gather-output\") pod \"must-gather-2zwx9\" (UID: \"84996fa3-ea52-4f6d-a4e2-5512ae4c119b\") " pod="openshift-must-gather-hvgw9/must-gather-2zwx9"
Nov 24 12:28:41 crc kubenswrapper[5072]: I1124 12:28:41.930890 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq858\" (UniqueName: \"kubernetes.io/projected/84996fa3-ea52-4f6d-a4e2-5512ae4c119b-kube-api-access-gq858\") pod \"must-gather-2zwx9\" (UID: \"84996fa3-ea52-4f6d-a4e2-5512ae4c119b\") " pod="openshift-must-gather-hvgw9/must-gather-2zwx9"
Nov 24 12:28:42 crc kubenswrapper[5072]: I1124 12:28:42.073442 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvgw9/must-gather-2zwx9"
Nov 24 12:28:42 crc kubenswrapper[5072]: I1124 12:28:42.562426 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hvgw9/must-gather-2zwx9"]
Nov 24 12:28:43 crc kubenswrapper[5072]: I1124 12:28:43.376451 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvgw9/must-gather-2zwx9" event={"ID":"84996fa3-ea52-4f6d-a4e2-5512ae4c119b","Type":"ContainerStarted","Data":"e7b8de71cdf8221471791a413a4dfdaa5eaada918fae64ff3b76739c35567c62"}
Nov 24 12:28:43 crc kubenswrapper[5072]: I1124 12:28:43.376793 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvgw9/must-gather-2zwx9" event={"ID":"84996fa3-ea52-4f6d-a4e2-5512ae4c119b","Type":"ContainerStarted","Data":"a3570275dbba848701f662c25935cc7ce48c09abfc62a2232afc0468eef12721"}
Nov 24 12:28:44 crc kubenswrapper[5072]: I1124 12:28:44.386936 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvgw9/must-gather-2zwx9" event={"ID":"84996fa3-ea52-4f6d-a4e2-5512ae4c119b","Type":"ContainerStarted","Data":"54eae4c0d781d7c31dab54c2d662b47c6e9e5f9ea3a6b60ecd3f0096d285c50d"}
Nov 24 12:28:44 crc kubenswrapper[5072]: I1124 12:28:44.408249 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hvgw9/must-gather-2zwx9" podStartSLOduration=3.408229701 podStartE2EDuration="3.408229701s" podCreationTimestamp="2025-11-24 12:28:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:28:44.400119787 +0000 UTC m=+4776.111644273" watchObservedRunningTime="2025-11-24 12:28:44.408229701 +0000 UTC m=+4776.119754187"
Nov 24 12:28:48 crc kubenswrapper[5072]: I1124 12:28:48.651583 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hvgw9/crc-debug-xcw4k"]
Nov 24 12:28:48 crc kubenswrapper[5072]: I1124 12:28:48.653711 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvgw9/crc-debug-xcw4k"
Nov 24 12:28:48 crc kubenswrapper[5072]: I1124 12:28:48.797020 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bhmw\" (UniqueName: \"kubernetes.io/projected/8ea00932-15b6-4ea5-8845-2bf946d7eaca-kube-api-access-9bhmw\") pod \"crc-debug-xcw4k\" (UID: \"8ea00932-15b6-4ea5-8845-2bf946d7eaca\") " pod="openshift-must-gather-hvgw9/crc-debug-xcw4k"
Nov 24 12:28:48 crc kubenswrapper[5072]: I1124 12:28:48.797407 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8ea00932-15b6-4ea5-8845-2bf946d7eaca-host\") pod \"crc-debug-xcw4k\" (UID: \"8ea00932-15b6-4ea5-8845-2bf946d7eaca\") " pod="openshift-must-gather-hvgw9/crc-debug-xcw4k"
Nov 24 12:28:48 crc kubenswrapper[5072]: I1124 12:28:48.899233 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bhmw\" (UniqueName: \"kubernetes.io/projected/8ea00932-15b6-4ea5-8845-2bf946d7eaca-kube-api-access-9bhmw\") pod \"crc-debug-xcw4k\" (UID: \"8ea00932-15b6-4ea5-8845-2bf946d7eaca\") " pod="openshift-must-gather-hvgw9/crc-debug-xcw4k"
Nov 24 12:28:48 crc kubenswrapper[5072]: I1124 12:28:48.899301 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8ea00932-15b6-4ea5-8845-2bf946d7eaca-host\") pod \"crc-debug-xcw4k\" (UID: \"8ea00932-15b6-4ea5-8845-2bf946d7eaca\") " pod="openshift-must-gather-hvgw9/crc-debug-xcw4k"
Nov 24 12:28:48 crc kubenswrapper[5072]: I1124 12:28:48.899443 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8ea00932-15b6-4ea5-8845-2bf946d7eaca-host\") pod \"crc-debug-xcw4k\" (UID: \"8ea00932-15b6-4ea5-8845-2bf946d7eaca\") " pod="openshift-must-gather-hvgw9/crc-debug-xcw4k"
Nov 24 12:28:48 crc kubenswrapper[5072]: I1124 12:28:48.931242 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bhmw\" (UniqueName: \"kubernetes.io/projected/8ea00932-15b6-4ea5-8845-2bf946d7eaca-kube-api-access-9bhmw\") pod \"crc-debug-xcw4k\" (UID: \"8ea00932-15b6-4ea5-8845-2bf946d7eaca\") " pod="openshift-must-gather-hvgw9/crc-debug-xcw4k"
Nov 24 12:28:48 crc kubenswrapper[5072]: I1124 12:28:48.977134 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvgw9/crc-debug-xcw4k"
Nov 24 12:28:49 crc kubenswrapper[5072]: I1124 12:28:49.435995 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvgw9/crc-debug-xcw4k" event={"ID":"8ea00932-15b6-4ea5-8845-2bf946d7eaca","Type":"ContainerStarted","Data":"9c571fb6bb6a171958eb7bda1220326140e13b00bbde779f6e77b5a19f24d6e0"}
Nov 24 12:28:49 crc kubenswrapper[5072]: I1124 12:28:49.436274 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvgw9/crc-debug-xcw4k" event={"ID":"8ea00932-15b6-4ea5-8845-2bf946d7eaca","Type":"ContainerStarted","Data":"d0413b276bda30b488c6b63bc5478ed47a54f873d70ca90719c169c6cddf1731"}
Nov 24 12:28:50 crc kubenswrapper[5072]: I1124 12:28:50.470702 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hvgw9/crc-debug-xcw4k" podStartSLOduration=2.470683492 podStartE2EDuration="2.470683492s" podCreationTimestamp="2025-11-24 12:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:28:50.468863726 +0000 UTC m=+4782.180388192" watchObservedRunningTime="2025-11-24 12:28:50.470683492 +0000 UTC m=+4782.182207968"
Nov 24 12:29:31 crc kubenswrapper[5072]: I1124 12:29:31.796601 5072 generic.go:334] "Generic (PLEG): container finished" podID="8ea00932-15b6-4ea5-8845-2bf946d7eaca" containerID="9c571fb6bb6a171958eb7bda1220326140e13b00bbde779f6e77b5a19f24d6e0" exitCode=0
Nov 24 12:29:31 crc kubenswrapper[5072]: I1124 12:29:31.796755 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvgw9/crc-debug-xcw4k" event={"ID":"8ea00932-15b6-4ea5-8845-2bf946d7eaca","Type":"ContainerDied","Data":"9c571fb6bb6a171958eb7bda1220326140e13b00bbde779f6e77b5a19f24d6e0"}
Nov 24 12:29:32 crc kubenswrapper[5072]: I1124 12:29:32.339023 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nshgw"]
Nov 24 12:29:32 crc kubenswrapper[5072]: I1124 12:29:32.341266 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nshgw"
Nov 24 12:29:32 crc kubenswrapper[5072]: I1124 12:29:32.361213 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nshgw"]
Nov 24 12:29:32 crc kubenswrapper[5072]: I1124 12:29:32.383804 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stmpj\" (UniqueName: \"kubernetes.io/projected/b4185a6b-ac1e-4148-b701-40f94500340a-kube-api-access-stmpj\") pod \"community-operators-nshgw\" (UID: \"b4185a6b-ac1e-4148-b701-40f94500340a\") " pod="openshift-marketplace/community-operators-nshgw"
Nov 24 12:29:32 crc kubenswrapper[5072]: I1124 12:29:32.383879 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4185a6b-ac1e-4148-b701-40f94500340a-utilities\") pod \"community-operators-nshgw\" (UID: \"b4185a6b-ac1e-4148-b701-40f94500340a\") " pod="openshift-marketplace/community-operators-nshgw"
Nov 24 12:29:32 crc kubenswrapper[5072]: I1124 12:29:32.383953 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4185a6b-ac1e-4148-b701-40f94500340a-catalog-content\") pod \"community-operators-nshgw\" (UID: \"b4185a6b-ac1e-4148-b701-40f94500340a\") " pod="openshift-marketplace/community-operators-nshgw"
Nov 24 12:29:32 crc kubenswrapper[5072]: I1124 12:29:32.485777 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stmpj\" (UniqueName: \"kubernetes.io/projected/b4185a6b-ac1e-4148-b701-40f94500340a-kube-api-access-stmpj\") pod \"community-operators-nshgw\" (UID: \"b4185a6b-ac1e-4148-b701-40f94500340a\") " pod="openshift-marketplace/community-operators-nshgw"
Nov 24 12:29:32 crc kubenswrapper[5072]: I1124 12:29:32.485838 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4185a6b-ac1e-4148-b701-40f94500340a-utilities\") pod \"community-operators-nshgw\" (UID: \"b4185a6b-ac1e-4148-b701-40f94500340a\") " pod="openshift-marketplace/community-operators-nshgw"
Nov 24 12:29:32 crc kubenswrapper[5072]: I1124 12:29:32.485895 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4185a6b-ac1e-4148-b701-40f94500340a-catalog-content\") pod \"community-operators-nshgw\" (UID: \"b4185a6b-ac1e-4148-b701-40f94500340a\") " pod="openshift-marketplace/community-operators-nshgw"
Nov 24 12:29:32 crc kubenswrapper[5072]: I1124 12:29:32.486401 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4185a6b-ac1e-4148-b701-40f94500340a-catalog-content\") pod \"community-operators-nshgw\" (UID: \"b4185a6b-ac1e-4148-b701-40f94500340a\") " pod="openshift-marketplace/community-operators-nshgw"
Nov 24 12:29:32 crc kubenswrapper[5072]: I1124 12:29:32.486505 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4185a6b-ac1e-4148-b701-40f94500340a-utilities\") pod \"community-operators-nshgw\" (UID: \"b4185a6b-ac1e-4148-b701-40f94500340a\") " pod="openshift-marketplace/community-operators-nshgw"
Nov 24 12:29:32 crc kubenswrapper[5072]: I1124 12:29:32.513188 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stmpj\" (UniqueName: \"kubernetes.io/projected/b4185a6b-ac1e-4148-b701-40f94500340a-kube-api-access-stmpj\") pod \"community-operators-nshgw\" (UID: \"b4185a6b-ac1e-4148-b701-40f94500340a\") " pod="openshift-marketplace/community-operators-nshgw"
Nov 24 12:29:32 crc kubenswrapper[5072]: I1124 12:29:32.666914 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nshgw"
Nov 24 12:29:32 crc kubenswrapper[5072]: I1124 12:29:32.950186 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvgw9/crc-debug-xcw4k"
Nov 24 12:29:32 crc kubenswrapper[5072]: I1124 12:29:32.999660 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8ea00932-15b6-4ea5-8845-2bf946d7eaca-host\") pod \"8ea00932-15b6-4ea5-8845-2bf946d7eaca\" (UID: \"8ea00932-15b6-4ea5-8845-2bf946d7eaca\") "
Nov 24 12:29:32 crc kubenswrapper[5072]: I1124 12:29:32.999745 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bhmw\" (UniqueName: \"kubernetes.io/projected/8ea00932-15b6-4ea5-8845-2bf946d7eaca-kube-api-access-9bhmw\") pod \"8ea00932-15b6-4ea5-8845-2bf946d7eaca\" (UID: \"8ea00932-15b6-4ea5-8845-2bf946d7eaca\") "
Nov 24 12:29:33 crc kubenswrapper[5072]: I1124 12:29:33.001794 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ea00932-15b6-4ea5-8845-2bf946d7eaca-host" (OuterVolumeSpecName: "host") pod "8ea00932-15b6-4ea5-8845-2bf946d7eaca" (UID: "8ea00932-15b6-4ea5-8845-2bf946d7eaca"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 12:29:33 crc kubenswrapper[5072]: I1124 12:29:33.001869 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hvgw9/crc-debug-xcw4k"]
Nov 24 12:29:33 crc kubenswrapper[5072]: I1124 12:29:33.009650 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ea00932-15b6-4ea5-8845-2bf946d7eaca-kube-api-access-9bhmw" (OuterVolumeSpecName: "kube-api-access-9bhmw") pod "8ea00932-15b6-4ea5-8845-2bf946d7eaca" (UID: "8ea00932-15b6-4ea5-8845-2bf946d7eaca"). InnerVolumeSpecName "kube-api-access-9bhmw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:29:33 crc kubenswrapper[5072]: I1124 12:29:33.033745 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hvgw9/crc-debug-xcw4k"]
Nov 24 12:29:33 crc kubenswrapper[5072]: I1124 12:29:33.101472 5072 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8ea00932-15b6-4ea5-8845-2bf946d7eaca-host\") on node \"crc\" DevicePath \"\""
Nov 24 12:29:33 crc kubenswrapper[5072]: I1124 12:29:33.101505 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bhmw\" (UniqueName: \"kubernetes.io/projected/8ea00932-15b6-4ea5-8845-2bf946d7eaca-kube-api-access-9bhmw\") on node \"crc\" DevicePath \"\""
Nov 24 12:29:33 crc kubenswrapper[5072]: I1124 12:29:33.281876 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nshgw"]
Nov 24 12:29:33 crc kubenswrapper[5072]: I1124 12:29:33.834627 5072 generic.go:334] "Generic (PLEG): container finished" podID="b4185a6b-ac1e-4148-b701-40f94500340a" containerID="e739e506cc3a847ad2659176e7ee0c7952b57317ba420c11c68093a4b29359b6" exitCode=0
Nov 24 12:29:33 crc kubenswrapper[5072]: I1124 12:29:33.834695 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nshgw" event={"ID":"b4185a6b-ac1e-4148-b701-40f94500340a","Type":"ContainerDied","Data":"e739e506cc3a847ad2659176e7ee0c7952b57317ba420c11c68093a4b29359b6"}
Nov 24 12:29:33 crc kubenswrapper[5072]: I1124 12:29:33.834720 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nshgw" event={"ID":"b4185a6b-ac1e-4148-b701-40f94500340a","Type":"ContainerStarted","Data":"eb757d90da63504b3cac79993574b7f6997c1a48ca06131786406fd490e7689c"}
Nov 24 12:29:33 crc kubenswrapper[5072]: I1124 12:29:33.836481 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0413b276bda30b488c6b63bc5478ed47a54f873d70ca90719c169c6cddf1731"
Nov 24 12:29:33 crc kubenswrapper[5072]: I1124 12:29:33.836555 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvgw9/crc-debug-xcw4k"
Nov 24 12:29:34 crc kubenswrapper[5072]: I1124 12:29:34.213471 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hvgw9/crc-debug-8pldq"]
Nov 24 12:29:34 crc kubenswrapper[5072]: E1124 12:29:34.213854 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ea00932-15b6-4ea5-8845-2bf946d7eaca" containerName="container-00"
Nov 24 12:29:34 crc kubenswrapper[5072]: I1124 12:29:34.213871 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ea00932-15b6-4ea5-8845-2bf946d7eaca" containerName="container-00"
Nov 24 12:29:34 crc kubenswrapper[5072]: I1124 12:29:34.214054 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ea00932-15b6-4ea5-8845-2bf946d7eaca" containerName="container-00"
Nov 24 12:29:34 crc kubenswrapper[5072]: I1124 12:29:34.214684 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvgw9/crc-debug-8pldq"
Nov 24 12:29:34 crc kubenswrapper[5072]: I1124 12:29:34.230206 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b57796b3-2fe5-46e7-8039-1b91c63823e9-host\") pod \"crc-debug-8pldq\" (UID: \"b57796b3-2fe5-46e7-8039-1b91c63823e9\") " pod="openshift-must-gather-hvgw9/crc-debug-8pldq"
Nov 24 12:29:34 crc kubenswrapper[5072]: I1124 12:29:34.230281 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c94l\" (UniqueName: \"kubernetes.io/projected/b57796b3-2fe5-46e7-8039-1b91c63823e9-kube-api-access-4c94l\") pod \"crc-debug-8pldq\" (UID: \"b57796b3-2fe5-46e7-8039-1b91c63823e9\") " pod="openshift-must-gather-hvgw9/crc-debug-8pldq"
Nov 24 12:29:34 crc kubenswrapper[5072]: I1124 12:29:34.333162 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b57796b3-2fe5-46e7-8039-1b91c63823e9-host\") pod \"crc-debug-8pldq\" (UID: \"b57796b3-2fe5-46e7-8039-1b91c63823e9\") " pod="openshift-must-gather-hvgw9/crc-debug-8pldq"
Nov 24 12:29:34 crc kubenswrapper[5072]: I1124 12:29:34.333234 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c94l\" (UniqueName: \"kubernetes.io/projected/b57796b3-2fe5-46e7-8039-1b91c63823e9-kube-api-access-4c94l\") pod \"crc-debug-8pldq\" (UID: \"b57796b3-2fe5-46e7-8039-1b91c63823e9\") " pod="openshift-must-gather-hvgw9/crc-debug-8pldq"
Nov 24 12:29:34 crc kubenswrapper[5072]: I1124 12:29:34.333340 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b57796b3-2fe5-46e7-8039-1b91c63823e9-host\") pod \"crc-debug-8pldq\" (UID: \"b57796b3-2fe5-46e7-8039-1b91c63823e9\") " pod="openshift-must-gather-hvgw9/crc-debug-8pldq"
Nov 24 12:29:34 crc kubenswrapper[5072]: I1124 12:29:34.351727 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c94l\" (UniqueName: \"kubernetes.io/projected/b57796b3-2fe5-46e7-8039-1b91c63823e9-kube-api-access-4c94l\") pod \"crc-debug-8pldq\" (UID: \"b57796b3-2fe5-46e7-8039-1b91c63823e9\") " pod="openshift-must-gather-hvgw9/crc-debug-8pldq"
Nov 24 12:29:34 crc kubenswrapper[5072]: I1124 12:29:34.530536 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvgw9/crc-debug-8pldq"
Need to start a new one" pod="openshift-must-gather-hvgw9/crc-debug-8pldq" Nov 24 12:29:34 crc kubenswrapper[5072]: I1124 12:29:34.857133 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvgw9/crc-debug-8pldq" event={"ID":"b57796b3-2fe5-46e7-8039-1b91c63823e9","Type":"ContainerStarted","Data":"8ff479af5e3c5a6fe11c463018fa8a62f9dfbd4917bffdd352881ceebab642a7"} Nov 24 12:29:35 crc kubenswrapper[5072]: I1124 12:29:35.028116 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ea00932-15b6-4ea5-8845-2bf946d7eaca" path="/var/lib/kubelet/pods/8ea00932-15b6-4ea5-8845-2bf946d7eaca/volumes" Nov 24 12:29:35 crc kubenswrapper[5072]: I1124 12:29:35.867399 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nshgw" event={"ID":"b4185a6b-ac1e-4148-b701-40f94500340a","Type":"ContainerStarted","Data":"bace9750351bf26b56507fc90d3fdad2a0c68fbe90e864a1bdfc8c6e3116212d"} Nov 24 12:29:35 crc kubenswrapper[5072]: I1124 12:29:35.868772 5072 generic.go:334] "Generic (PLEG): container finished" podID="b57796b3-2fe5-46e7-8039-1b91c63823e9" containerID="0b4e5684d3590ff65e325e0e99643a1255964ebd31990185010d18e19291a009" exitCode=0 Nov 24 12:29:35 crc kubenswrapper[5072]: I1124 12:29:35.868807 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvgw9/crc-debug-8pldq" event={"ID":"b57796b3-2fe5-46e7-8039-1b91c63823e9","Type":"ContainerDied","Data":"0b4e5684d3590ff65e325e0e99643a1255964ebd31990185010d18e19291a009"} Nov 24 12:29:37 crc kubenswrapper[5072]: I1124 12:29:37.133973 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvgw9/crc-debug-8pldq" Nov 24 12:29:37 crc kubenswrapper[5072]: I1124 12:29:37.179539 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b57796b3-2fe5-46e7-8039-1b91c63823e9-host\") pod \"b57796b3-2fe5-46e7-8039-1b91c63823e9\" (UID: \"b57796b3-2fe5-46e7-8039-1b91c63823e9\") " Nov 24 12:29:37 crc kubenswrapper[5072]: I1124 12:29:37.179593 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c94l\" (UniqueName: \"kubernetes.io/projected/b57796b3-2fe5-46e7-8039-1b91c63823e9-kube-api-access-4c94l\") pod \"b57796b3-2fe5-46e7-8039-1b91c63823e9\" (UID: \"b57796b3-2fe5-46e7-8039-1b91c63823e9\") " Nov 24 12:29:37 crc kubenswrapper[5072]: I1124 12:29:37.179846 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b57796b3-2fe5-46e7-8039-1b91c63823e9-host" (OuterVolumeSpecName: "host") pod "b57796b3-2fe5-46e7-8039-1b91c63823e9" (UID: "b57796b3-2fe5-46e7-8039-1b91c63823e9"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:29:37 crc kubenswrapper[5072]: I1124 12:29:37.179983 5072 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b57796b3-2fe5-46e7-8039-1b91c63823e9-host\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:37 crc kubenswrapper[5072]: I1124 12:29:37.184553 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b57796b3-2fe5-46e7-8039-1b91c63823e9-kube-api-access-4c94l" (OuterVolumeSpecName: "kube-api-access-4c94l") pod "b57796b3-2fe5-46e7-8039-1b91c63823e9" (UID: "b57796b3-2fe5-46e7-8039-1b91c63823e9"). InnerVolumeSpecName "kube-api-access-4c94l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:29:37 crc kubenswrapper[5072]: I1124 12:29:37.281153 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4c94l\" (UniqueName: \"kubernetes.io/projected/b57796b3-2fe5-46e7-8039-1b91c63823e9-kube-api-access-4c94l\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:37 crc kubenswrapper[5072]: I1124 12:29:37.477533 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hvgw9/crc-debug-8pldq"] Nov 24 12:29:37 crc kubenswrapper[5072]: I1124 12:29:37.485763 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hvgw9/crc-debug-8pldq"] Nov 24 12:29:37 crc kubenswrapper[5072]: I1124 12:29:37.892764 5072 generic.go:334] "Generic (PLEG): container finished" podID="b4185a6b-ac1e-4148-b701-40f94500340a" containerID="bace9750351bf26b56507fc90d3fdad2a0c68fbe90e864a1bdfc8c6e3116212d" exitCode=0 Nov 24 12:29:37 crc kubenswrapper[5072]: I1124 12:29:37.892848 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nshgw" event={"ID":"b4185a6b-ac1e-4148-b701-40f94500340a","Type":"ContainerDied","Data":"bace9750351bf26b56507fc90d3fdad2a0c68fbe90e864a1bdfc8c6e3116212d"} Nov 24 12:29:37 crc kubenswrapper[5072]: I1124 12:29:37.895511 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ff479af5e3c5a6fe11c463018fa8a62f9dfbd4917bffdd352881ceebab642a7" Nov 24 12:29:37 crc kubenswrapper[5072]: I1124 12:29:37.895686 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvgw9/crc-debug-8pldq" Nov 24 12:29:38 crc kubenswrapper[5072]: E1124 12:29:38.115954 5072 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb57796b3_2fe5_46e7_8039_1b91c63823e9.slice/crio-8ff479af5e3c5a6fe11c463018fa8a62f9dfbd4917bffdd352881ceebab642a7\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb57796b3_2fe5_46e7_8039_1b91c63823e9.slice\": RecentStats: unable to find data in memory cache]" Nov 24 12:29:38 crc kubenswrapper[5072]: I1124 12:29:38.666502 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hvgw9/crc-debug-544ss"] Nov 24 12:29:38 crc kubenswrapper[5072]: E1124 12:29:38.667935 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b57796b3-2fe5-46e7-8039-1b91c63823e9" containerName="container-00" Nov 24 12:29:38 crc kubenswrapper[5072]: I1124 12:29:38.667966 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="b57796b3-2fe5-46e7-8039-1b91c63823e9" containerName="container-00" Nov 24 12:29:38 crc kubenswrapper[5072]: I1124 12:29:38.668196 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="b57796b3-2fe5-46e7-8039-1b91c63823e9" containerName="container-00" Nov 24 12:29:38 crc kubenswrapper[5072]: I1124 12:29:38.668967 5072 util.go:30] "No sandbox for pod can be found. 
Nov 24 12:29:38 crc kubenswrapper[5072]: I1124 12:29:38.812629 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130-host\") pod \"crc-debug-544ss\" (UID: \"8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130\") " pod="openshift-must-gather-hvgw9/crc-debug-544ss"
Nov 24 12:29:38 crc kubenswrapper[5072]: I1124 12:29:38.812982 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4hmd\" (UniqueName: \"kubernetes.io/projected/8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130-kube-api-access-t4hmd\") pod \"crc-debug-544ss\" (UID: \"8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130\") " pod="openshift-must-gather-hvgw9/crc-debug-544ss"
Nov 24 12:29:38 crc kubenswrapper[5072]: I1124 12:29:38.908957 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nshgw" event={"ID":"b4185a6b-ac1e-4148-b701-40f94500340a","Type":"ContainerStarted","Data":"267f4247ad219da017ec34003fd523399e30999d78c66f58df411c91f77ca78e"}
Nov 24 12:29:38 crc kubenswrapper[5072]: I1124 12:29:38.914169 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4hmd\" (UniqueName: \"kubernetes.io/projected/8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130-kube-api-access-t4hmd\") pod \"crc-debug-544ss\" (UID: \"8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130\") " pod="openshift-must-gather-hvgw9/crc-debug-544ss"
Nov 24 12:29:38 crc kubenswrapper[5072]: I1124 12:29:38.914305 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130-host\") pod \"crc-debug-544ss\" (UID: \"8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130\") " pod="openshift-must-gather-hvgw9/crc-debug-544ss"
Nov 24 12:29:38 crc kubenswrapper[5072]: I1124 12:29:38.914428 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130-host\") pod \"crc-debug-544ss\" (UID: \"8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130\") " pod="openshift-must-gather-hvgw9/crc-debug-544ss"
Nov 24 12:29:38 crc kubenswrapper[5072]: I1124 12:29:38.931921 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nshgw" podStartSLOduration=2.4671375700000002 podStartE2EDuration="6.931905549s" podCreationTimestamp="2025-11-24 12:29:32 +0000 UTC" firstStartedPulling="2025-11-24 12:29:33.836182146 +0000 UTC m=+4825.547706622" lastFinishedPulling="2025-11-24 12:29:38.300950125 +0000 UTC m=+4830.012474601" observedRunningTime="2025-11-24 12:29:38.928151054 +0000 UTC m=+4830.639675540" watchObservedRunningTime="2025-11-24 12:29:38.931905549 +0000 UTC m=+4830.643430025"
Nov 24 12:29:38 crc kubenswrapper[5072]: I1124 12:29:38.933672 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4hmd\" (UniqueName: \"kubernetes.io/projected/8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130-kube-api-access-t4hmd\") pod \"crc-debug-544ss\" (UID: \"8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130\") " pod="openshift-must-gather-hvgw9/crc-debug-544ss"
Nov 24 12:29:38 crc kubenswrapper[5072]: I1124 12:29:38.983056 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvgw9/crc-debug-544ss"
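[Editor's note] The pod_startup_latency_tracker entry above is internally consistent arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A minimal Python check against the timestamps copied from the entry (nanosecond digits truncated to microseconds; the subtraction rule is my reading of the tracker's field names, not something the log states outright):

```python
from datetime import datetime, timezone

# Timestamps copied from the pod_startup_latency_tracker entry above,
# truncated from nanoseconds to microseconds for strptime's %f.
fmt = "%Y-%m-%d %H:%M:%S.%f %z"
created = datetime(2025, 11, 24, 12, 29, 32, tzinfo=timezone.utc)
first_pull = datetime.strptime("2025-11-24 12:29:33.836182 +0000", fmt)
last_pull = datetime.strptime("2025-11-24 12:29:38.300950 +0000", fmt)
running = datetime.strptime("2025-11-24 12:29:38.931905 +0000", fmt)

e2e = (running - created).total_seconds()             # podStartE2EDuration
slo = e2e - (last_pull - first_pull).total_seconds()  # E2E minus image-pull time
print(f"{e2e=:.6f} {slo=:.6f}")  # ~6.931905 and ~2.467137, matching the entry
```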
Nov 24 12:29:39 crc kubenswrapper[5072]: W1124 12:29:39.018126 5072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8da5aa10_a6d2_4b6a_8ff7_a8efd7f7c130.slice/crio-e8998de536e54b1bdb0bb348a78cfc2e582d7117131016d34548f77e2a0a32a8 WatchSource:0}: Error finding container e8998de536e54b1bdb0bb348a78cfc2e582d7117131016d34548f77e2a0a32a8: Status 404 returned error can't find the container with id e8998de536e54b1bdb0bb348a78cfc2e582d7117131016d34548f77e2a0a32a8
Nov 24 12:29:39 crc kubenswrapper[5072]: I1124 12:29:39.028025 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b57796b3-2fe5-46e7-8039-1b91c63823e9" path="/var/lib/kubelet/pods/b57796b3-2fe5-46e7-8039-1b91c63823e9/volumes"
Nov 24 12:29:39 crc kubenswrapper[5072]: I1124 12:29:39.930183 5072 generic.go:334] "Generic (PLEG): container finished" podID="8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130" containerID="9bf2dc3059bd3a69af5bdf81f0c6168b7e6935daa7a4a1a39f7b2bec4a8aa92f" exitCode=0
Nov 24 12:29:39 crc kubenswrapper[5072]: I1124 12:29:39.930408 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvgw9/crc-debug-544ss" event={"ID":"8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130","Type":"ContainerDied","Data":"9bf2dc3059bd3a69af5bdf81f0c6168b7e6935daa7a4a1a39f7b2bec4a8aa92f"}
Nov 24 12:29:39 crc kubenswrapper[5072]: I1124 12:29:39.930572 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvgw9/crc-debug-544ss" event={"ID":"8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130","Type":"ContainerStarted","Data":"e8998de536e54b1bdb0bb348a78cfc2e582d7117131016d34548f77e2a0a32a8"}
Nov 24 12:29:39 crc kubenswrapper[5072]: I1124 12:29:39.968164 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hvgw9/crc-debug-544ss"]
Nov 24 12:29:39 crc kubenswrapper[5072]: I1124 12:29:39.988020 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hvgw9/crc-debug-544ss"]
Nov 24 12:29:41 crc kubenswrapper[5072]: I1124 12:29:41.064947 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvgw9/crc-debug-544ss"
Nov 24 12:29:41 crc kubenswrapper[5072]: I1124 12:29:41.169339 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4hmd\" (UniqueName: \"kubernetes.io/projected/8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130-kube-api-access-t4hmd\") pod \"8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130\" (UID: \"8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130\") "
Nov 24 12:29:41 crc kubenswrapper[5072]: I1124 12:29:41.169753 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130-host\") pod \"8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130\" (UID: \"8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130\") "
Nov 24 12:29:41 crc kubenswrapper[5072]: I1124 12:29:41.169880 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130-host" (OuterVolumeSpecName: "host") pod "8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130" (UID: "8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 12:29:41 crc kubenswrapper[5072]: I1124 12:29:41.170667 5072 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130-host\") on node \"crc\" DevicePath \"\""
Nov 24 12:29:41 crc kubenswrapper[5072]: I1124 12:29:41.175511 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130-kube-api-access-t4hmd" (OuterVolumeSpecName: "kube-api-access-t4hmd") pod "8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130" (UID: "8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130"). InnerVolumeSpecName "kube-api-access-t4hmd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:29:41 crc kubenswrapper[5072]: I1124 12:29:41.272698 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4hmd\" (UniqueName: \"kubernetes.io/projected/8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130-kube-api-access-t4hmd\") on node \"crc\" DevicePath \"\""
Nov 24 12:29:41 crc kubenswrapper[5072]: I1124 12:29:41.953545 5072 scope.go:117] "RemoveContainer" containerID="9bf2dc3059bd3a69af5bdf81f0c6168b7e6935daa7a4a1a39f7b2bec4a8aa92f"
Nov 24 12:29:41 crc kubenswrapper[5072]: I1124 12:29:41.953619 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvgw9/crc-debug-544ss"
Nov 24 12:29:42 crc kubenswrapper[5072]: I1124 12:29:42.667463 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nshgw"
Nov 24 12:29:42 crc kubenswrapper[5072]: I1124 12:29:42.668835 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nshgw"
Nov 24 12:29:42 crc kubenswrapper[5072]: I1124 12:29:42.712586 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nshgw"
Nov 24 12:29:43 crc kubenswrapper[5072]: I1124 12:29:43.031832 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130" path="/var/lib/kubelet/pods/8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130/volumes"
Nov 24 12:29:43 crc kubenswrapper[5072]: I1124 12:29:43.645508 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 12:29:43 crc kubenswrapper[5072]: I1124 12:29:43.645578 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 12:29:44 crc kubenswrapper[5072]: I1124 12:29:44.048038 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nshgw"
Nov 24 12:29:44 crc kubenswrapper[5072]: I1124 12:29:44.117333 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nshgw"]
Nov 24 12:29:45 crc kubenswrapper[5072]: I1124 12:29:45.995112 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nshgw" podUID="b4185a6b-ac1e-4148-b701-40f94500340a" containerName="registry-server" containerID="cri-o://267f4247ad219da017ec34003fd523399e30999d78c66f58df411c91f77ca78e" gracePeriod=2
Nov 24 12:29:46 crc kubenswrapper[5072]: I1124 12:29:46.465212 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nshgw"
Nov 24 12:29:46 crc kubenswrapper[5072]: I1124 12:29:46.517539 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4185a6b-ac1e-4148-b701-40f94500340a-catalog-content\") pod \"b4185a6b-ac1e-4148-b701-40f94500340a\" (UID: \"b4185a6b-ac1e-4148-b701-40f94500340a\") "
Nov 24 12:29:46 crc kubenswrapper[5072]: I1124 12:29:46.517604 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stmpj\" (UniqueName: \"kubernetes.io/projected/b4185a6b-ac1e-4148-b701-40f94500340a-kube-api-access-stmpj\") pod \"b4185a6b-ac1e-4148-b701-40f94500340a\" (UID: \"b4185a6b-ac1e-4148-b701-40f94500340a\") "
Nov 24 12:29:46 crc kubenswrapper[5072]: I1124 12:29:46.517702 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4185a6b-ac1e-4148-b701-40f94500340a-utilities\") pod \"b4185a6b-ac1e-4148-b701-40f94500340a\" (UID: \"b4185a6b-ac1e-4148-b701-40f94500340a\") "
Nov 24 12:29:46 crc kubenswrapper[5072]: I1124 12:29:46.519101 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4185a6b-ac1e-4148-b701-40f94500340a-utilities" (OuterVolumeSpecName: "utilities") pod "b4185a6b-ac1e-4148-b701-40f94500340a" (UID: "b4185a6b-ac1e-4148-b701-40f94500340a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 12:29:46 crc kubenswrapper[5072]: I1124 12:29:46.524508 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4185a6b-ac1e-4148-b701-40f94500340a-kube-api-access-stmpj" (OuterVolumeSpecName: "kube-api-access-stmpj") pod "b4185a6b-ac1e-4148-b701-40f94500340a" (UID: "b4185a6b-ac1e-4148-b701-40f94500340a"). InnerVolumeSpecName "kube-api-access-stmpj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:29:46 crc kubenswrapper[5072]: I1124 12:29:46.619496 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4185a6b-ac1e-4148-b701-40f94500340a-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 12:29:46 crc kubenswrapper[5072]: I1124 12:29:46.619536 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stmpj\" (UniqueName: \"kubernetes.io/projected/b4185a6b-ac1e-4148-b701-40f94500340a-kube-api-access-stmpj\") on node \"crc\" DevicePath \"\""
Nov 24 12:29:47 crc kubenswrapper[5072]: I1124 12:29:47.013847 5072 generic.go:334] "Generic (PLEG): container finished" podID="b4185a6b-ac1e-4148-b701-40f94500340a" containerID="267f4247ad219da017ec34003fd523399e30999d78c66f58df411c91f77ca78e" exitCode=0
Nov 24 12:29:47 crc kubenswrapper[5072]: I1124 12:29:47.013892 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nshgw"
Nov 24 12:29:47 crc kubenswrapper[5072]: I1124 12:29:47.013907 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nshgw" event={"ID":"b4185a6b-ac1e-4148-b701-40f94500340a","Type":"ContainerDied","Data":"267f4247ad219da017ec34003fd523399e30999d78c66f58df411c91f77ca78e"}
Nov 24 12:29:47 crc kubenswrapper[5072]: I1124 12:29:47.013943 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nshgw" event={"ID":"b4185a6b-ac1e-4148-b701-40f94500340a","Type":"ContainerDied","Data":"eb757d90da63504b3cac79993574b7f6997c1a48ca06131786406fd490e7689c"}
Nov 24 12:29:47 crc kubenswrapper[5072]: I1124 12:29:47.013991 5072 scope.go:117] "RemoveContainer" containerID="267f4247ad219da017ec34003fd523399e30999d78c66f58df411c91f77ca78e"
Nov 24 12:29:47 crc kubenswrapper[5072]: I1124 12:29:47.034706 5072 scope.go:117] "RemoveContainer" containerID="bace9750351bf26b56507fc90d3fdad2a0c68fbe90e864a1bdfc8c6e3116212d"
Nov 24 12:29:47 crc kubenswrapper[5072]: I1124 12:29:47.056717 5072 scope.go:117] "RemoveContainer" containerID="e739e506cc3a847ad2659176e7ee0c7952b57317ba420c11c68093a4b29359b6"
Nov 24 12:29:47 crc kubenswrapper[5072]: I1124 12:29:47.092061 5072 scope.go:117] "RemoveContainer" containerID="267f4247ad219da017ec34003fd523399e30999d78c66f58df411c91f77ca78e"
Nov 24 12:29:47 crc kubenswrapper[5072]: E1124 12:29:47.092423 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"267f4247ad219da017ec34003fd523399e30999d78c66f58df411c91f77ca78e\": container with ID starting with 267f4247ad219da017ec34003fd523399e30999d78c66f58df411c91f77ca78e not found: ID does not exist" containerID="267f4247ad219da017ec34003fd523399e30999d78c66f58df411c91f77ca78e"
Nov 24 12:29:47 crc kubenswrapper[5072]: I1124 12:29:47.092460 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"267f4247ad219da017ec34003fd523399e30999d78c66f58df411c91f77ca78e"} err="failed to get container status \"267f4247ad219da017ec34003fd523399e30999d78c66f58df411c91f77ca78e\": rpc error: code = NotFound desc = could not find container \"267f4247ad219da017ec34003fd523399e30999d78c66f58df411c91f77ca78e\": container with ID starting with 267f4247ad219da017ec34003fd523399e30999d78c66f58df411c91f77ca78e not found: ID does not exist"
Nov 24 12:29:47 crc kubenswrapper[5072]: I1124 12:29:47.092482 5072 scope.go:117] "RemoveContainer" containerID="bace9750351bf26b56507fc90d3fdad2a0c68fbe90e864a1bdfc8c6e3116212d"
Nov 24 12:29:47 crc kubenswrapper[5072]: E1124 12:29:47.092793 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bace9750351bf26b56507fc90d3fdad2a0c68fbe90e864a1bdfc8c6e3116212d\": container with ID starting with bace9750351bf26b56507fc90d3fdad2a0c68fbe90e864a1bdfc8c6e3116212d not found: ID does not exist" containerID="bace9750351bf26b56507fc90d3fdad2a0c68fbe90e864a1bdfc8c6e3116212d"
Nov 24 12:29:47 crc kubenswrapper[5072]: I1124 12:29:47.092818 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bace9750351bf26b56507fc90d3fdad2a0c68fbe90e864a1bdfc8c6e3116212d"} err="failed to get container status \"bace9750351bf26b56507fc90d3fdad2a0c68fbe90e864a1bdfc8c6e3116212d\": rpc error: code = NotFound desc = could not find container \"bace9750351bf26b56507fc90d3fdad2a0c68fbe90e864a1bdfc8c6e3116212d\": container with ID starting with bace9750351bf26b56507fc90d3fdad2a0c68fbe90e864a1bdfc8c6e3116212d not found: ID does not exist"
\"bace9750351bf26b56507fc90d3fdad2a0c68fbe90e864a1bdfc8c6e3116212d\": container with ID starting with bace9750351bf26b56507fc90d3fdad2a0c68fbe90e864a1bdfc8c6e3116212d not found: ID does not exist" Nov 24 12:29:47 crc kubenswrapper[5072]: I1124 12:29:47.092834 5072 scope.go:117] "RemoveContainer" containerID="e739e506cc3a847ad2659176e7ee0c7952b57317ba420c11c68093a4b29359b6" Nov 24 12:29:47 crc kubenswrapper[5072]: E1124 12:29:47.093066 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e739e506cc3a847ad2659176e7ee0c7952b57317ba420c11c68093a4b29359b6\": container with ID starting with e739e506cc3a847ad2659176e7ee0c7952b57317ba420c11c68093a4b29359b6 not found: ID does not exist" containerID="e739e506cc3a847ad2659176e7ee0c7952b57317ba420c11c68093a4b29359b6" Nov 24 12:29:47 crc kubenswrapper[5072]: I1124 12:29:47.093091 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e739e506cc3a847ad2659176e7ee0c7952b57317ba420c11c68093a4b29359b6"} err="failed to get container status \"e739e506cc3a847ad2659176e7ee0c7952b57317ba420c11c68093a4b29359b6\": rpc error: code = NotFound desc = could not find container \"e739e506cc3a847ad2659176e7ee0c7952b57317ba420c11c68093a4b29359b6\": container with ID starting with e739e506cc3a847ad2659176e7ee0c7952b57317ba420c11c68093a4b29359b6 not found: ID does not exist" Nov 24 12:29:47 crc kubenswrapper[5072]: I1124 12:29:47.106703 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4185a6b-ac1e-4148-b701-40f94500340a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4185a6b-ac1e-4148-b701-40f94500340a" (UID: "b4185a6b-ac1e-4148-b701-40f94500340a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:29:47 crc kubenswrapper[5072]: I1124 12:29:47.129642 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4185a6b-ac1e-4148-b701-40f94500340a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:47 crc kubenswrapper[5072]: I1124 12:29:47.356453 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nshgw"] Nov 24 12:29:47 crc kubenswrapper[5072]: I1124 12:29:47.364740 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nshgw"] Nov 24 12:29:49 crc kubenswrapper[5072]: I1124 12:29:49.035995 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4185a6b-ac1e-4148-b701-40f94500340a" path="/var/lib/kubelet/pods/b4185a6b-ac1e-4148-b701-40f94500340a/volumes" Nov 24 12:30:00 crc kubenswrapper[5072]: I1124 12:30:00.162821 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399790-95xz6"] Nov 24 12:30:00 crc kubenswrapper[5072]: E1124 12:30:00.164567 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130" containerName="container-00" Nov 24 12:30:00 crc kubenswrapper[5072]: I1124 12:30:00.164590 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130" containerName="container-00" Nov 24 12:30:00 crc kubenswrapper[5072]: E1124 12:30:00.164618 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4185a6b-ac1e-4148-b701-40f94500340a" containerName="extract-utilities" Nov 24 12:30:00 crc kubenswrapper[5072]: I1124 12:30:00.164627 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4185a6b-ac1e-4148-b701-40f94500340a" containerName="extract-utilities" Nov 24 12:30:00 crc kubenswrapper[5072]: E1124 12:30:00.164641 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4185a6b-ac1e-4148-b701-40f94500340a" containerName="registry-server" Nov 24 12:30:00 crc kubenswrapper[5072]: I1124 12:30:00.164648 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4185a6b-ac1e-4148-b701-40f94500340a" containerName="registry-server" Nov 24 12:30:00 crc kubenswrapper[5072]: E1124 12:30:00.164729 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4185a6b-ac1e-4148-b701-40f94500340a" containerName="extract-content" Nov 24 12:30:00 crc kubenswrapper[5072]: I1124 12:30:00.164792 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4185a6b-ac1e-4148-b701-40f94500340a" containerName="extract-content" Nov 24 12:30:00 crc kubenswrapper[5072]: I1124 12:30:00.165014 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4185a6b-ac1e-4148-b701-40f94500340a" containerName="registry-server" Nov 24 12:30:00 crc kubenswrapper[5072]: I1124 12:30:00.165036 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="8da5aa10-a6d2-4b6a-8ff7-a8efd7f7c130" containerName="container-00" Nov 24 12:30:00 crc kubenswrapper[5072]: I1124 12:30:00.165956 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-95xz6" Nov 24 12:30:00 crc kubenswrapper[5072]: I1124 12:30:00.171423 5072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 12:30:00 crc kubenswrapper[5072]: I1124 12:30:00.174165 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399790-95xz6"] Nov 24 12:30:00 crc kubenswrapper[5072]: I1124 12:30:00.176246 5072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 12:30:00 crc kubenswrapper[5072]: I1124 12:30:00.296897 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65729592-ebe2-4752-885e-7fb08c984125-config-volume\") pod \"collect-profiles-29399790-95xz6\" (UID: \"65729592-ebe2-4752-885e-7fb08c984125\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-95xz6" Nov 24 12:30:00 crc kubenswrapper[5072]: I1124 12:30:00.297050 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhs4p\" (UniqueName: \"kubernetes.io/projected/65729592-ebe2-4752-885e-7fb08c984125-kube-api-access-hhs4p\") pod \"collect-profiles-29399790-95xz6\" (UID: \"65729592-ebe2-4752-885e-7fb08c984125\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-95xz6" Nov 24 12:30:00 crc kubenswrapper[5072]: I1124 12:30:00.297077 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65729592-ebe2-4752-885e-7fb08c984125-secret-volume\") pod \"collect-profiles-29399790-95xz6\" (UID: \"65729592-ebe2-4752-885e-7fb08c984125\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-95xz6" Nov 24 12:30:00 crc kubenswrapper[5072]: I1124 12:30:00.399618 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65729592-ebe2-4752-885e-7fb08c984125-config-volume\") pod \"collect-profiles-29399790-95xz6\" (UID: \"65729592-ebe2-4752-885e-7fb08c984125\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-95xz6" Nov 24 12:30:00 crc kubenswrapper[5072]: I1124 12:30:00.399776 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhs4p\" (UniqueName: \"kubernetes.io/projected/65729592-ebe2-4752-885e-7fb08c984125-kube-api-access-hhs4p\") pod \"collect-profiles-29399790-95xz6\" (UID: \"65729592-ebe2-4752-885e-7fb08c984125\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-95xz6" Nov 24 12:30:00 crc kubenswrapper[5072]: I1124 12:30:00.399821 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65729592-ebe2-4752-885e-7fb08c984125-secret-volume\") pod \"collect-profiles-29399790-95xz6\" (UID: \"65729592-ebe2-4752-885e-7fb08c984125\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-95xz6" Nov 24 12:30:00 crc kubenswrapper[5072]: I1124 12:30:00.400713 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65729592-ebe2-4752-885e-7fb08c984125-config-volume\") pod 
\"collect-profiles-29399790-95xz6\" (UID: \"65729592-ebe2-4752-885e-7fb08c984125\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-95xz6" Nov 24 12:30:00 crc kubenswrapper[5072]: I1124 12:30:00.405479 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65729592-ebe2-4752-885e-7fb08c984125-secret-volume\") pod \"collect-profiles-29399790-95xz6\" (UID: \"65729592-ebe2-4752-885e-7fb08c984125\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-95xz6" Nov 24 12:30:00 crc kubenswrapper[5072]: I1124 12:30:00.421846 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhs4p\" (UniqueName: \"kubernetes.io/projected/65729592-ebe2-4752-885e-7fb08c984125-kube-api-access-hhs4p\") pod \"collect-profiles-29399790-95xz6\" (UID: \"65729592-ebe2-4752-885e-7fb08c984125\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-95xz6" Nov 24 12:30:00 crc kubenswrapper[5072]: I1124 12:30:00.494688 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-95xz6" Nov 24 12:30:00 crc kubenswrapper[5072]: I1124 12:30:00.949722 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399790-95xz6"] Nov 24 12:30:01 crc kubenswrapper[5072]: I1124 12:30:01.163997 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-95xz6" event={"ID":"65729592-ebe2-4752-885e-7fb08c984125","Type":"ContainerStarted","Data":"064f8deba2b44b217fde8552b43f534c1895b693714f6ac30627f3b2779861a9"} Nov 24 12:30:01 crc kubenswrapper[5072]: I1124 12:30:01.164050 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-95xz6" event={"ID":"65729592-ebe2-4752-885e-7fb08c984125","Type":"ContainerStarted","Data":"70904b9e6eeb9ba8ec63fda619cdf0e50000860e7fd1f0b183ac1d629d91bdd7"} Nov 24 12:30:02 crc kubenswrapper[5072]: I1124 12:30:02.173002 5072 generic.go:334] "Generic (PLEG): container finished" podID="65729592-ebe2-4752-885e-7fb08c984125" containerID="064f8deba2b44b217fde8552b43f534c1895b693714f6ac30627f3b2779861a9" exitCode=0 Nov 24 12:30:02 crc kubenswrapper[5072]: I1124 12:30:02.173275 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-95xz6" event={"ID":"65729592-ebe2-4752-885e-7fb08c984125","Type":"ContainerDied","Data":"064f8deba2b44b217fde8552b43f534c1895b693714f6ac30627f3b2779861a9"} Nov 24 12:30:04 crc kubenswrapper[5072]: I1124 12:30:04.159527 5072 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 12:30:04 crc kubenswrapper[5072]: I1124 12:30:04.193797 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-95xz6" event={"ID":"65729592-ebe2-4752-885e-7fb08c984125","Type":"ContainerDied","Data":"70904b9e6eeb9ba8ec63fda619cdf0e50000860e7fd1f0b183ac1d629d91bdd7"}
Nov 24 12:30:04 crc kubenswrapper[5072]: I1124 12:30:04.193832 5072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70904b9e6eeb9ba8ec63fda619cdf0e50000860e7fd1f0b183ac1d629d91bdd7"
Nov 24 12:30:04 crc kubenswrapper[5072]: I1124 12:30:04.193885 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-95xz6"
Nov 24 12:30:04 crc kubenswrapper[5072]: I1124 12:30:04.287003 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65729592-ebe2-4752-885e-7fb08c984125-secret-volume\") pod \"65729592-ebe2-4752-885e-7fb08c984125\" (UID: \"65729592-ebe2-4752-885e-7fb08c984125\") "
Nov 24 12:30:04 crc kubenswrapper[5072]: I1124 12:30:04.287280 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhs4p\" (UniqueName: \"kubernetes.io/projected/65729592-ebe2-4752-885e-7fb08c984125-kube-api-access-hhs4p\") pod \"65729592-ebe2-4752-885e-7fb08c984125\" (UID: \"65729592-ebe2-4752-885e-7fb08c984125\") "
Nov 24 12:30:04 crc kubenswrapper[5072]: I1124 12:30:04.287525 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65729592-ebe2-4752-885e-7fb08c984125-config-volume\") pod \"65729592-ebe2-4752-885e-7fb08c984125\" (UID: \"65729592-ebe2-4752-885e-7fb08c984125\") "
Nov 24 12:30:04 crc kubenswrapper[5072]: I1124 12:30:04.288016 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65729592-ebe2-4752-885e-7fb08c984125-config-volume" (OuterVolumeSpecName: "config-volume") pod "65729592-ebe2-4752-885e-7fb08c984125" (UID: "65729592-ebe2-4752-885e-7fb08c984125"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:30:04 crc kubenswrapper[5072]: I1124 12:30:04.288581 5072 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65729592-ebe2-4752-885e-7fb08c984125-config-volume\") on node \"crc\" DevicePath \"\""
Nov 24 12:30:04 crc kubenswrapper[5072]: I1124 12:30:04.292730 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65729592-ebe2-4752-885e-7fb08c984125-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "65729592-ebe2-4752-885e-7fb08c984125" (UID: "65729592-ebe2-4752-885e-7fb08c984125"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:30:04 crc kubenswrapper[5072]: I1124 12:30:04.298277 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65729592-ebe2-4752-885e-7fb08c984125-kube-api-access-hhs4p" (OuterVolumeSpecName: "kube-api-access-hhs4p") pod "65729592-ebe2-4752-885e-7fb08c984125" (UID: "65729592-ebe2-4752-885e-7fb08c984125"). InnerVolumeSpecName "kube-api-access-hhs4p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:30:04 crc kubenswrapper[5072]: I1124 12:30:04.402676 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhs4p\" (UniqueName: \"kubernetes.io/projected/65729592-ebe2-4752-885e-7fb08c984125-kube-api-access-hhs4p\") on node \"crc\" DevicePath \"\""
Nov 24 12:30:04 crc kubenswrapper[5072]: I1124 12:30:04.402732 5072 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65729592-ebe2-4752-885e-7fb08c984125-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 24 12:30:05 crc kubenswrapper[5072]: I1124 12:30:05.252279 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399745-lr9s2"]
Nov 24 12:30:05 crc kubenswrapper[5072]: I1124 12:30:05.259177 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399745-lr9s2"]
Nov 24 12:30:07 crc kubenswrapper[5072]: I1124 12:30:07.043346 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb3542d8-1d20-441f-8af8-031a8559c49b" path="/var/lib/kubelet/pods/fb3542d8-1d20-441f-8af8-031a8559c49b/volumes"
Nov 24 12:30:13 crc kubenswrapper[5072]: I1124 12:30:13.645172 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 12:30:13 crc kubenswrapper[5072]: I1124 12:30:13.645782 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 12:30:23 crc kubenswrapper[5072]: I1124 12:30:23.162853 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7785cf9ff8-jrntg_02bf4aaa-02e9-42b0-96e7-182557310711/barbican-api/0.log"
Nov 24 12:30:23 crc kubenswrapper[5072]: I1124 12:30:23.322415 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7785cf9ff8-jrntg_02bf4aaa-02e9-42b0-96e7-182557310711/barbican-api-log/0.log"
Nov 24 12:30:23 crc kubenswrapper[5072]: I1124 12:30:23.412884 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-56f6884b8b-d9lh4_17dcf560-c08b-4adb-b4e1-90887cddba39/barbican-keystone-listener/0.log"
Nov 24 12:30:23 crc kubenswrapper[5072]: I1124 12:30:23.637805 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-56f6884b8b-d9lh4_17dcf560-c08b-4adb-b4e1-90887cddba39/barbican-keystone-listener-log/0.log"
Nov 24 12:30:24 crc kubenswrapper[5072]: I1124 12:30:24.018830 5072 scope.go:117] "RemoveContainer" containerID="efd5842877ce866c92ce3b1b26eacbb8c5a7ba097d3f2d26e8e369edc733bba7"
Nov 24 12:30:24 crc kubenswrapper[5072]: I1124 12:30:24.046438 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-55f6867c5c-rjpdx_522a3a4f-dbc9-4b6a-9bff-5df22b4cba44/barbican-worker-log/0.log"
Nov 24 12:30:24 crc kubenswrapper[5072]: I1124 12:30:24.112932 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-55f6867c5c-rjpdx_522a3a4f-dbc9-4b6a-9bff-5df22b4cba44/barbican-worker/0.log"
Nov 24 12:30:24 crc kubenswrapper[5072]: I1124 12:30:24.301978 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-cv2h4_ddef4dcc-c1f4-4057-8503-14afc5bffd37/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 12:30:24 crc kubenswrapper[5072]: I1124 12:30:24.338470 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21/ceilometer-central-agent/0.log"
Nov 24 12:30:24 crc kubenswrapper[5072]: I1124 12:30:24.363606 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21/ceilometer-notification-agent/0.log"
Nov 24 12:30:24 crc kubenswrapper[5072]: I1124 12:30:24.503398 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21/proxy-httpd/0.log"
Nov 24 12:30:24 crc kubenswrapper[5072]: I1124 12:30:24.546199 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e6e58a4b-cc8d-45ea-8aad-10f44bcc2c21/sg-core/0.log"
Nov 24 12:30:24 crc kubenswrapper[5072]: I1124 12:30:24.604750 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-nr928_95c83f58-e5a9-4038-ae80-2ba999d47b81/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 12:30:24 crc kubenswrapper[5072]: I1124 12:30:24.742730 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-h9kpr_42275dab-0c0f-488a-9d9f-00d08fd1a9fb/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 12:30:24 crc kubenswrapper[5072]: I1124 12:30:24.886735 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_83c629ab-d9bd-4c85-b3e8-7d43a3d1c495/cinder-api/0.log"
Nov 24 12:30:24 crc kubenswrapper[5072]: I1124 12:30:24.926928 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_83c629ab-d9bd-4c85-b3e8-7d43a3d1c495/cinder-api-log/0.log"
Nov 24 12:30:25 crc kubenswrapper[5072]: I1124 12:30:25.111805 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_e51194ec-7c1f-4609-996f-ee210bb13bb5/probe/0.log"
Nov 24 12:30:25 crc kubenswrapper[5072]: I1124 12:30:25.229247 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_e51194ec-7c1f-4609-996f-ee210bb13bb5/cinder-backup/0.log"
Nov 24 12:30:25 crc kubenswrapper[5072]: I1124 12:30:25.292259 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5053f25d-e6d3-4a92-88f4-5659485403af/cinder-scheduler/0.log"
Nov 24 12:30:25 crc kubenswrapper[5072]: I1124 12:30:25.422818 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5053f25d-e6d3-4a92-88f4-5659485403af/probe/0.log"
Nov 24 12:30:25 crc kubenswrapper[5072]: I1124 12:30:25.512688 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0/probe/0.log"
Nov 24 12:30:25 crc kubenswrapper[5072]: I1124 12:30:25.575902 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_9ed8d6e1-fa71-401b-acd5-341fbc2ec5a0/cinder-volume/0.log"
Nov 24 12:30:25 crc kubenswrapper[5072]: I1124 12:30:25.725799 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-5lhlt_3960ebf7-e874-4d40-9d12-759d8bf2b312/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 12:30:25 crc kubenswrapper[5072]: I1124 12:30:25.833177 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-vptlp_792ebb76-1e10-452d-a1e3-159bb5b80975/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 12:30:25 crc kubenswrapper[5072]: I1124 12:30:25.926483 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-g6frb_0307a1dc-4248-472b-9b5e-51f2f116ac64/init/0.log"
Nov 24 12:30:26 crc kubenswrapper[5072]: I1124 12:30:26.150600 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-g6frb_0307a1dc-4248-472b-9b5e-51f2f116ac64/init/0.log"
Nov 24 12:30:26 crc kubenswrapper[5072]: I1124 12:30:26.173424 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-g6frb_0307a1dc-4248-472b-9b5e-51f2f116ac64/dnsmasq-dns/0.log"
Nov 24 12:30:26 crc kubenswrapper[5072]: I1124 12:30:26.213975 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1d71c9a2-3657-43f6-aec2-b53e3ea8fc01/glance-httpd/0.log"
Nov 24 12:30:26 crc kubenswrapper[5072]: I1124 12:30:26.358323 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1d71c9a2-3657-43f6-aec2-b53e3ea8fc01/glance-log/0.log"
Nov 24 12:30:26 crc kubenswrapper[5072]: I1124 12:30:26.371627 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_61880241-c7c3-4422-adbb-3e6323831d71/glance-httpd/0.log"
Nov 24 12:30:26 crc kubenswrapper[5072]: I1124 12:30:26.452491 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_61880241-c7c3-4422-adbb-3e6323831d71/glance-log/0.log"
Nov 24 12:30:26 crc kubenswrapper[5072]: I1124 12:30:26.645838 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-575b5d47b6-n66fd_78739666-79c8-4af9-9766-6793e7975629/horizon/0.log"
Nov 24 12:30:26 crc kubenswrapper[5072]: I1124 12:30:26.684071 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-575b5d47b6-n66fd_78739666-79c8-4af9-9766-6793e7975629/horizon/1.log"
Nov 24 12:30:26 crc kubenswrapper[5072]: I1124 12:30:26.703111 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-dcmv7_55863054-3da4-4d20-80f7-9dd43d6ce388/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 12:30:26 crc kubenswrapper[5072]: I1124 12:30:26.884634 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-575b5d47b6-n66fd_78739666-79c8-4af9-9766-6793e7975629/horizon-log/0.log"
Nov 24 12:30:27 crc kubenswrapper[5072]: I1124 12:30:27.028264 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-lrxgj_b7687777-0417-42e1-8f0e-201de683f32d/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 12:30:27 crc kubenswrapper[5072]: I1124 12:30:27.337268 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_4d9aa589-2a3a-4e9a-a1d6-92fc939cf2f6/kube-state-metrics/0.log"
Nov 24 12:30:27 crc kubenswrapper[5072]: I1124 12:30:27.367338 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29399761-642mr_360e5e7f-fc1f-4d24-8446-b97c9c04aa46/keystone-cron/0.log"
Nov 24 12:30:27 crc kubenswrapper[5072]: I1124 12:30:27.656877 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6cc7b79dbf-mkd8x_f71f36ff-e9cc-4207-8381-a4edf917c2b1/keystone-api/0.log"
Nov 24 12:30:27 crc kubenswrapper[5072]: I1124 12:30:27.679786 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-n6dbq_619cab13-44ee-48c6-bf40-4baddd9ad88e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 12:30:27 crc kubenswrapper[5072]: I1124 12:30:27.937198 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_7c1f9647-62ad-452d-84ae-81211ebc18b5/probe/0.log"
Nov 24 12:30:28 crc kubenswrapper[5072]: I1124 12:30:28.354840 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_aee02894-118d-46a9-88b6-4e2099bdf16f/probe/0.log"
Nov 24 12:30:28 crc kubenswrapper[5072]: I1124 12:30:28.361642 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_f4e064b6-df4e-436b-9dec-c72ff87569f2/manila-api/0.log"
Nov 24 12:30:28 crc kubenswrapper[5072]: I1124 12:30:28.402187 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_7c1f9647-62ad-452d-84ae-81211ebc18b5/manila-scheduler/0.log"
Nov 24 12:30:28 crc kubenswrapper[5072]: I1124 12:30:28.629598 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_aee02894-118d-46a9-88b6-4e2099bdf16f/manila-share/0.log"
Nov 24 12:30:28 crc kubenswrapper[5072]: I1124 12:30:28.873534 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6dc7d7697-tf7nw_c1ae9399-6f4c-4053-84c8-821eb2867dc8/neutron-httpd/0.log"
Nov 24 12:30:28 crc kubenswrapper[5072]: I1124 12:30:28.946173 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6dc7d7697-tf7nw_c1ae9399-6f4c-4053-84c8-821eb2867dc8/neutron-api/0.log"
Nov 24 12:30:29 crc kubenswrapper[5072]: I1124 12:30:29.076148 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_f4e064b6-df4e-436b-9dec-c72ff87569f2/manila-api-log/0.log"
Nov 24 12:30:29 crc kubenswrapper[5072]: I1124 12:30:29.112051 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-pfz95_45051007-ac2c-49b5-acda-c9fdccd8cf9d/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 12:30:29 crc kubenswrapper[5072]: I1124 12:30:29.480845 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_82f52ff9-d0f6-4a88-bc4e-47d4d47808ac/nova-api-log/0.log"
Nov 24 12:30:29 crc kubenswrapper[5072]: I1124 12:30:29.680925 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_cf68ac0f-299c-4ed5-a198-30bd0b2a7544/nova-cell0-conductor-conductor/0.log"
Nov 24 12:30:29 crc kubenswrapper[5072]: I1124 12:30:29.910925 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_42a95d10-e572-4170-aa79-9b98d2c290b7/nova-cell1-conductor-conductor/0.log"
Nov 24 12:30:29 crc kubenswrapper[5072]: I1124 12:30:29.988068 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_82f52ff9-d0f6-4a88-bc4e-47d4d47808ac/nova-api-api/0.log"
Nov 24 12:30:30 crc kubenswrapper[5072]: I1124 12:30:30.063020 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_8a061135-fd7e-4c6c-bbca-422e684c0ccb/nova-cell1-novncproxy-novncproxy/0.log"
Nov 24 12:30:30 crc kubenswrapper[5072]: I1124 12:30:30.147273 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-gpkb7_a25d738b-a5be-44f2-86f2-9b554c3f7947/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 12:30:30 crc kubenswrapper[5072]: I1124 12:30:30.349912 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_cb7d5b02-88e5-4f50-8039-3d573e832977/nova-metadata-log/0.log"
Nov 24 12:30:30 crc kubenswrapper[5072]: I1124 12:30:30.591049 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e05f8763-9e64-4bf6-84c8-25df03057309/mysql-bootstrap/0.log"
Nov 24 12:30:30 crc kubenswrapper[5072]: I1124 12:30:30.671309 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_c842f0bb-64ee-4e70-a276-cf281480cf05/nova-scheduler-scheduler/0.log"
Nov 24 12:30:30 crc kubenswrapper[5072]: I1124 12:30:30.827113 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e05f8763-9e64-4bf6-84c8-25df03057309/galera/0.log"
Nov 24 12:30:30 crc kubenswrapper[5072]: I1124 12:30:30.848863 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e05f8763-9e64-4bf6-84c8-25df03057309/mysql-bootstrap/0.log"
Nov 24 12:30:31 crc kubenswrapper[5072]: I1124 12:30:31.034802 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0f143b81-90ef-461e-a3b5-36ceb68eda94/mysql-bootstrap/0.log"
Nov 24 12:30:31 crc kubenswrapper[5072]: I1124 12:30:31.246853 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0f143b81-90ef-461e-a3b5-36ceb68eda94/mysql-bootstrap/0.log"
Nov 24 12:30:31 crc kubenswrapper[5072]: I1124 12:30:31.315590 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0f143b81-90ef-461e-a3b5-36ceb68eda94/galera/0.log"
Nov 24 12:30:31 crc kubenswrapper[5072]: I1124 12:30:31.470308 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_36162589-ddbd-4386-82e5-62d4d73d41b7/openstackclient/0.log"
Nov 24 12:30:31 crc kubenswrapper[5072]: I1124 12:30:31.562948 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ltkhm_d1f48ba7-b537-4282-9eef-aee78410afcb/ovn-controller/0.log"
Nov 24 12:30:32 crc kubenswrapper[5072]: I1124 12:30:32.184590 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-dwffh_6dc3beca-8832-4852-a397-cca5accca1a1/openstack-network-exporter/0.log"
Nov 24 12:30:32 crc kubenswrapper[5072]: I1124 12:30:32.271159 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_cb7d5b02-88e5-4f50-8039-3d573e832977/nova-metadata-metadata/0.log"
Nov 24 12:30:32 crc kubenswrapper[5072]: I1124 12:30:32.398924 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7tcxz_a15ce4b3-7344-4b9f-983a-0065209e9d68/ovsdb-server-init/0.log"
Nov 24 12:30:32 crc kubenswrapper[5072]: I1124 12:30:32.554637 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7tcxz_a15ce4b3-7344-4b9f-983a-0065209e9d68/ovsdb-server-init/0.log"
Nov 24 12:30:32 crc kubenswrapper[5072]: I1124 12:30:32.604002 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7tcxz_a15ce4b3-7344-4b9f-983a-0065209e9d68/ovs-vswitchd/0.log"
Nov 24 12:30:32 crc kubenswrapper[5072]: I1124 12:30:32.621336 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7tcxz_a15ce4b3-7344-4b9f-983a-0065209e9d68/ovsdb-server/0.log"
Nov 24 12:30:32 crc kubenswrapper[5072]: I1124 12:30:32.849764 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-qk9gt_60fbd22d-6dd6-4bdf-aa92-3b4682feeee0/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 12:30:32 crc kubenswrapper[5072]: I1124 12:30:32.872514 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_67176bb7-8d1f-453f-b403-7e2f323f41f8/openstack-network-exporter/0.log"
Nov 24 12:30:32 crc kubenswrapper[5072]: I1124 12:30:32.921408 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_67176bb7-8d1f-453f-b403-7e2f323f41f8/ovn-northd/0.log"
Nov 24 12:30:33 crc kubenswrapper[5072]: I1124 12:30:33.010883 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e8ca3957-ce1c-49e8-a56b-d0f406d2e078/openstack-network-exporter/0.log"
Nov 24 12:30:33 crc kubenswrapper[5072]: I1124 12:30:33.127297 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e8ca3957-ce1c-49e8-a56b-d0f406d2e078/ovsdbserver-nb/0.log"
Nov 24 12:30:33 crc kubenswrapper[5072]: I1124 12:30:33.228199 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c95fc4be-5531-4d4d-98a5-aeb6d64b732d/openstack-network-exporter/0.log"
Nov 24 12:30:33 crc kubenswrapper[5072]: I1124 12:30:33.352474 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c95fc4be-5531-4d4d-98a5-aeb6d64b732d/ovsdbserver-sb/0.log"
Nov 24 12:30:33 crc kubenswrapper[5072]: I1124 12:30:33.495873 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-64d9f94c7b-p7b2p_35ccd8e2-71e0-4a36-a51a-5c9a4734b124/placement-log/0.log"
Nov 24 12:30:33 crc kubenswrapper[5072]: I1124 12:30:33.508097 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-64d9f94c7b-p7b2p_35ccd8e2-71e0-4a36-a51a-5c9a4734b124/placement-api/0.log"
Nov 24 12:30:33 crc kubenswrapper[5072]: I1124 12:30:33.969195 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_38928c57-6c7d-4fb6-afe8-ed2602e450c3/setup-container/0.log"
Nov 24 12:30:34 crc kubenswrapper[5072]: I1124 12:30:34.143754 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_38928c57-6c7d-4fb6-afe8-ed2602e450c3/rabbitmq/0.log"
Nov 24 12:30:34 crc kubenswrapper[5072]: I1124 12:30:34.222772 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_02112c1c-a6a9-42e6-938e-e3e8d7b7217c/setup-container/0.log"
Nov 24 12:30:34 crc kubenswrapper[5072]: I1124 12:30:34.242905 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_38928c57-6c7d-4fb6-afe8-ed2602e450c3/setup-container/0.log"
Nov 24 12:30:34 crc kubenswrapper[5072]: I1124 12:30:34.505498 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_02112c1c-a6a9-42e6-938e-e3e8d7b7217c/rabbitmq/0.log"
Nov 24 12:30:34 crc kubenswrapper[5072]: I1124 12:30:34.520919 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_02112c1c-a6a9-42e6-938e-e3e8d7b7217c/setup-container/0.log"
Nov 24 12:30:34 crc kubenswrapper[5072]: I1124 12:30:34.571918 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-fbs95_ed449e35-f14d-45cf-b172-49441c6d676a/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 12:30:34 crc kubenswrapper[5072]: I1124 12:30:34.730233 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-xdvcd_0dcc0eb2-52d6-4d82-bddd-960848462a81/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 12:30:34 crc kubenswrapper[5072]: I1124 12:30:34.801357 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-9klcc_d97f4dff-1854-4cf0-9546-1626e9a5856b/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 12:30:35 crc kubenswrapper[5072]: I1124 12:30:35.046902 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-p68cc_c8ddc412-753d-44ff-9ac9-39a003a786dd/ssh-known-hosts-edpm-deployment/0.log"
Nov 24 12:30:35 crc kubenswrapper[5072]: I1124 12:30:35.128282 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_c4384a66-1728-45a3-9ab4-d1479c51cd18/tempest-tests-tempest-tests-runner/0.log"
Nov 24 12:30:35 crc kubenswrapper[5072]: I1124 12:30:35.265789 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_5e7f7b49-4b5e-4050-bfdb-0cea02628c47/test-operator-logs-container/0.log"
Nov 24 12:30:35 crc kubenswrapper[5072]: I1124 12:30:35.464277 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-nw2kj_2f1ddd2f-edb5-4613-9fde-a27861d899bc/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 12:30:43 crc kubenswrapper[5072]: I1124 12:30:43.644495 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 12:30:43 crc kubenswrapper[5072]: I1124 12:30:43.646047 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 12:30:43 crc kubenswrapper[5072]: I1124 12:30:43.646175 5072 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb"
Nov 24 12:30:43 crc kubenswrapper[5072]: I1124 12:30:43.647010 5072 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7b04b3f19e5637c82668c12efbec9e34299e5f49bfee3074b7fc7f031d0a99f9"} pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 12:30:43 crc kubenswrapper[5072]: I1124 12:30:43.647168 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" containerID="cri-o://7b04b3f19e5637c82668c12efbec9e34299e5f49bfee3074b7fc7f031d0a99f9" gracePeriod=600
Nov 24 12:30:44 crc kubenswrapper[5072]: I1124 12:30:44.538242 5072 generic.go:334] "Generic (PLEG): container finished" podID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerID="7b04b3f19e5637c82668c12efbec9e34299e5f49bfee3074b7fc7f031d0a99f9" exitCode=0
Nov 24 12:30:44 crc kubenswrapper[5072]: I1124 12:30:44.538585 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerDied","Data":"7b04b3f19e5637c82668c12efbec9e34299e5f49bfee3074b7fc7f031d0a99f9"}
Nov 24 12:30:44 crc kubenswrapper[5072]: I1124 12:30:44.538740 5072 scope.go:117] "RemoveContainer" containerID="5e4b2551d31676c56045004e4ca1ab40457429150ff7753248ba4a9525c16c9e"
Nov 24 12:30:45 crc kubenswrapper[5072]: I1124 12:30:45.551768 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerStarted","Data":"19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c"}
Nov 24 12:30:58 crc kubenswrapper[5072]: I1124 12:30:58.571590 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_f0ecdfec-d313-40dc-97a6-344109151fe8/memcached/0.log"
Nov 24 12:31:05 crc kubenswrapper[5072]: I1124 12:31:05.447317 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65_e7f9a3f4-4e91-406d-b8da-1bf99ac318bd/util/0.log"
Nov 24 12:31:05 crc kubenswrapper[5072]: I1124 12:31:05.591510 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65_e7f9a3f4-4e91-406d-b8da-1bf99ac318bd/util/0.log"
Nov 24 12:31:05 crc kubenswrapper[5072]: I1124 12:31:05.622619 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65_e7f9a3f4-4e91-406d-b8da-1bf99ac318bd/pull/0.log"
Nov 24 12:31:05 crc kubenswrapper[5072]: I1124 12:31:05.669108 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65_e7f9a3f4-4e91-406d-b8da-1bf99ac318bd/pull/0.log"
Nov 24 12:31:05 crc kubenswrapper[5072]: I1124 12:31:05.841530 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65_e7f9a3f4-4e91-406d-b8da-1bf99ac318bd/util/0.log"
Nov 24 12:31:05 crc kubenswrapper[5072]: I1124 12:31:05.846461 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65_e7f9a3f4-4e91-406d-b8da-1bf99ac318bd/pull/0.log"
Nov 24 12:31:05 crc kubenswrapper[5072]: I1124 12:31:05.852265 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9b2186798aeb926003696bc84f4630fc1fe1628e77d31f0b55ade92554p4x65_e7f9a3f4-4e91-406d-b8da-1bf99ac318bd/extract/0.log"
Nov 24 12:31:06 crc kubenswrapper[5072]: I1124 12:31:06.062684 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-4jwxd_a4945263-5f74-4c93-b782-8a381e40275c/manager/0.log"
Nov 24 12:31:06 crc kubenswrapper[5072]: I1124 12:31:06.085305 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-4jwxd_a4945263-5f74-4c93-b782-8a381e40275c/kube-rbac-proxy/0.log"
Nov 24 12:31:06 crc kubenswrapper[5072]: I1124 12:31:06.876385 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-bpsnt_500235e4-633d-486d-8ea9-bc0830747b6f/kube-rbac-proxy/0.log"
Nov 24 12:31:06 crc kubenswrapper[5072]: I1124 12:31:06.896680 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-756nd_459e53de-60cc-4763-a093-4940428df8c3/kube-rbac-proxy/0.log"
Nov 24 12:31:06 crc kubenswrapper[5072]: I1124 12:31:06.983749 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-756nd_459e53de-60cc-4763-a093-4940428df8c3/manager/0.log"
Nov 24 12:31:07 crc kubenswrapper[5072]: I1124 12:31:07.099206 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-bpsnt_500235e4-633d-486d-8ea9-bc0830747b6f/manager/0.log"
Nov 24 12:31:07 crc kubenswrapper[5072]: I1124 12:31:07.119282 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-5s9dg_67cd7ebd-5d77-4c59-a1af-2283997e4de4/kube-rbac-proxy/0.log"
Nov 24 12:31:07 crc kubenswrapper[5072]: I1124 12:31:07.263665 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-5s9dg_67cd7ebd-5d77-4c59-a1af-2283997e4de4/manager/0.log"
Nov 24 12:31:07 crc kubenswrapper[5072]: I1124 12:31:07.330551 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-qn647_62a8ddcc-1b1e-4bd6-8e4b-41273932a900/kube-rbac-proxy/0.log"
Nov 24 12:31:07 crc kubenswrapper[5072]: I1124 12:31:07.341511 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-qn647_62a8ddcc-1b1e-4bd6-8e4b-41273932a900/manager/0.log"
Nov 24 12:31:07 crc kubenswrapper[5072]: I1124 12:31:07.509507 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-wkqz4_bdcb07cf-3d31-40c8-bd3b-1c791408a3b9/kube-rbac-proxy/0.log"
Nov 24 12:31:07 crc kubenswrapper[5072]: I1124 12:31:07.553639 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-wkqz4_bdcb07cf-3d31-40c8-bd3b-1c791408a3b9/manager/0.log"
Nov 24 12:31:07 crc kubenswrapper[5072]: I1124 12:31:07.667758 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-858778c9dc-lrk4z_e8ca42b5-22f1-4101-bbf6-d053bda8b6f2/kube-rbac-proxy/0.log"
Nov 24 12:31:07 crc kubenswrapper[5072]: I1124 12:31:07.790512 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-7mzzw_d7f60d9f-304e-4531-aeec-6c4a576d3a1e/kube-rbac-proxy/0.log"
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-7mzzw_d7f60d9f-304e-4531-aeec-6c4a576d3a1e/kube-rbac-proxy/0.log" Nov 24 12:31:07 crc kubenswrapper[5072]: I1124 12:31:07.821660 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-858778c9dc-lrk4z_e8ca42b5-22f1-4101-bbf6-d053bda8b6f2/manager/0.log" Nov 24 12:31:07 crc kubenswrapper[5072]: I1124 12:31:07.937487 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-7mzzw_d7f60d9f-304e-4531-aeec-6c4a576d3a1e/manager/0.log" Nov 24 12:31:08 crc kubenswrapper[5072]: I1124 12:31:08.028552 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-rbff2_39f25192-6179-44cd-894a-0ebf01a675e1/kube-rbac-proxy/0.log" Nov 24 12:31:08 crc kubenswrapper[5072]: I1124 12:31:08.029959 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-rbff2_39f25192-6179-44cd-894a-0ebf01a675e1/manager/0.log" Nov 24 12:31:08 crc kubenswrapper[5072]: I1124 12:31:08.192410 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6588bc459f-mnxdw_7bf279a5-5615-474c-8f17-0066eb4a681d/kube-rbac-proxy/0.log" Nov 24 12:31:08 crc kubenswrapper[5072]: I1124 12:31:08.283770 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-vwkpc_9696dd76-5a2d-46d8-b344-bde781c44bd9/kube-rbac-proxy/0.log" Nov 24 12:31:08 crc kubenswrapper[5072]: I1124 12:31:08.304806 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6588bc459f-mnxdw_7bf279a5-5615-474c-8f17-0066eb4a681d/manager/0.log" Nov 24 12:31:08 crc kubenswrapper[5072]: I1124 12:31:08.363145 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-vwkpc_9696dd76-5a2d-46d8-b344-bde781c44bd9/manager/0.log" Nov 24 12:31:08 crc kubenswrapper[5072]: I1124 12:31:08.497304 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-b7nnc_82a02d23-10da-4e39-a81a-9f63180ecc89/kube-rbac-proxy/0.log" Nov 24 12:31:08 crc kubenswrapper[5072]: I1124 12:31:08.525538 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-b7nnc_82a02d23-10da-4e39-a81a-9f63180ecc89/manager/0.log" Nov 24 12:31:08 crc kubenswrapper[5072]: I1124 12:31:08.585687 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-r7mbw_fc8a9f5f-37fe-417e-9016-886b359a5a71/kube-rbac-proxy/0.log" Nov 24 12:31:08 crc kubenswrapper[5072]: I1124 12:31:08.746110 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-4z4cm_1b89d966-3ff3-451d-859c-0198a7cde893/kube-rbac-proxy/0.log" Nov 24 12:31:08 crc kubenswrapper[5072]: I1124 12:31:08.753936 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-r7mbw_fc8a9f5f-37fe-417e-9016-886b359a5a71/manager/0.log" Nov 24 12:31:08 crc kubenswrapper[5072]: I1124 
12:31:08.766048 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-4z4cm_1b89d966-3ff3-451d-859c-0198a7cde893/manager/0.log" Nov 24 12:31:08 crc kubenswrapper[5072]: I1124 12:31:08.936424 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-5sknj_ff7d4c70-56ad-4baa-b7eb-bba77d3811bb/kube-rbac-proxy/0.log" Nov 24 12:31:08 crc kubenswrapper[5072]: I1124 12:31:08.968257 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-5sknj_ff7d4c70-56ad-4baa-b7eb-bba77d3811bb/manager/0.log" Nov 24 12:31:09 crc kubenswrapper[5072]: I1124 12:31:09.234462 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-fj9hm_647cb5b8-46fc-4c8d-90af-18ef37a34807/registry-server/0.log" Nov 24 12:31:09 crc kubenswrapper[5072]: I1124 12:31:09.331725 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-68868f9b94-xzgj7_cf28b96d-16c5-40f6-a588-0a77f527d52d/operator/0.log" Nov 24 12:31:09 crc kubenswrapper[5072]: I1124 12:31:09.465933 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-p6hcl_edb8360f-2977-47c4-9029-02341a92a6de/kube-rbac-proxy/0.log" Nov 24 12:31:09 crc kubenswrapper[5072]: I1124 12:31:09.581578 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-jh4nt_64a55d3a-a7ab-4bce-8497-1992e9591a90/kube-rbac-proxy/0.log" Nov 24 12:31:09 crc kubenswrapper[5072]: I1124 12:31:09.586853 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-p6hcl_edb8360f-2977-47c4-9029-02341a92a6de/manager/0.log" Nov 24 12:31:09 crc kubenswrapper[5072]: I1124 12:31:09.838678 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-jh4nt_64a55d3a-a7ab-4bce-8497-1992e9591a90/manager/0.log" Nov 24 12:31:09 crc kubenswrapper[5072]: I1124 12:31:09.844753 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-lgdqp_88168be8-a585-468a-a983-f56bbb31b4a0/operator/0.log" Nov 24 12:31:10 crc kubenswrapper[5072]: I1124 12:31:10.005759 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-r7bsx_321368f6-c64b-4d58-ae2a-e939d6d447f7/kube-rbac-proxy/0.log" Nov 24 12:31:10 crc kubenswrapper[5072]: I1124 12:31:10.104478 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-r7bsx_321368f6-c64b-4d58-ae2a-e939d6d447f7/manager/0.log" Nov 24 12:31:10 crc kubenswrapper[5072]: I1124 12:31:10.182750 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-cfj6h_7c599673-db2a-4c37-88fa-45e7166f6c20/kube-rbac-proxy/0.log" Nov 24 12:31:10 crc kubenswrapper[5072]: I1124 12:31:10.278465 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-698dfbd98-5pfmt_ae6c4b3b-27a4-4d23-bdd0-0ea9e100d400/manager/0.log" Nov 24 12:31:10 crc 
kubenswrapper[5072]: I1124 12:31:10.384736 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-dvldw_cd9a8dda-b29e-4e10-837a-d00bdcf6bdaa/kube-rbac-proxy/0.log" Nov 24 12:31:10 crc kubenswrapper[5072]: I1124 12:31:10.431353 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-dvldw_cd9a8dda-b29e-4e10-837a-d00bdcf6bdaa/manager/0.log" Nov 24 12:31:10 crc kubenswrapper[5072]: I1124 12:31:10.436101 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-cfj6h_7c599673-db2a-4c37-88fa-45e7166f6c20/manager/0.log" Nov 24 12:31:10 crc kubenswrapper[5072]: I1124 12:31:10.560656 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-bz2zj_0d17eb13-802b-4d4a-b221-1481e16e1110/kube-rbac-proxy/0.log" Nov 24 12:31:10 crc kubenswrapper[5072]: I1124 12:31:10.580678 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-bz2zj_0d17eb13-802b-4d4a-b221-1481e16e1110/manager/0.log" Nov 24 12:31:28 crc kubenswrapper[5072]: I1124 12:31:28.939326 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-nwsjb_7b8bcc47-53bd-45a5-937f-b515a314f662/control-plane-machine-set-operator/0.log" Nov 24 12:31:29 crc kubenswrapper[5072]: I1124 12:31:29.542243 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dzh8r_bcbc6938-ae1b-4306-a73d-7f2c5dc64047/kube-rbac-proxy/0.log" Nov 24 12:31:29 crc kubenswrapper[5072]: I1124 12:31:29.601233 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dzh8r_bcbc6938-ae1b-4306-a73d-7f2c5dc64047/machine-api-operator/0.log" Nov 24 12:31:42 crc kubenswrapper[5072]: I1124 12:31:42.838228 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-g8nvp_69649578-7c12-47bd-900a-a6ebe612c305/cert-manager-controller/0.log" Nov 24 12:31:42 crc kubenswrapper[5072]: I1124 12:31:42.991922 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-v62vq_01b23be1-c336-40a5-8b57-60ed5edddef1/cert-manager-cainjector/0.log" Nov 24 12:31:43 crc kubenswrapper[5072]: I1124 12:31:43.028647 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-hcmw7_5da70e2a-5e52-437b-b1e4-fee7f8460a72/cert-manager-webhook/0.log" Nov 24 12:31:54 crc kubenswrapper[5072]: I1124 12:31:54.442458 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-ppjv5_abe6e260-c56f-46ff-b5a7-a7da6df2b64f/nmstate-console-plugin/0.log" Nov 24 12:31:54 crc kubenswrapper[5072]: I1124 12:31:54.609292 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-hhvlc_9b1242fa-766e-4ef6-b41f-0cc670aa35c2/nmstate-handler/0.log" Nov 24 12:31:54 crc kubenswrapper[5072]: I1124 12:31:54.656257 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-2ntqs_186c5c36-95cc-427c-af18-4ba4d0c8ea58/nmstate-metrics/0.log" Nov 24 12:31:54 crc kubenswrapper[5072]: I1124 12:31:54.660498 5072 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-2ntqs_186c5c36-95cc-427c-af18-4ba4d0c8ea58/kube-rbac-proxy/0.log" Nov 24 12:31:54 crc kubenswrapper[5072]: I1124 12:31:54.842273 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-q824z_b5b7e963-3dd2-4073-9297-2b03a0411ff3/nmstate-operator/0.log" Nov 24 12:31:54 crc kubenswrapper[5072]: I1124 12:31:54.961499 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-9x2g2_56a60d6f-8026-4722-95ad-aa81efc124f8/nmstate-webhook/0.log" Nov 24 12:32:09 crc kubenswrapper[5072]: I1124 12:32:09.858464 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-54sxn_b9a94a05-9a99-48b5-8ba7-a1bd99f05577/kube-rbac-proxy/0.log" Nov 24 12:32:09 crc kubenswrapper[5072]: I1124 12:32:09.937780 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-54sxn_b9a94a05-9a99-48b5-8ba7-a1bd99f05577/controller/0.log" Nov 24 12:32:10 crc kubenswrapper[5072]: I1124 12:32:10.025216 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/cp-frr-files/0.log" Nov 24 12:32:10 crc kubenswrapper[5072]: I1124 12:32:10.313159 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/cp-metrics/0.log" Nov 24 12:32:10 crc kubenswrapper[5072]: I1124 12:32:10.327504 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/cp-frr-files/0.log" Nov 24 12:32:10 crc kubenswrapper[5072]: I1124 12:32:10.338665 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/cp-reloader/0.log" Nov 24 12:32:10 crc kubenswrapper[5072]: I1124 12:32:10.355036 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/cp-reloader/0.log" Nov 24 12:32:11 crc kubenswrapper[5072]: I1124 12:32:11.012848 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/cp-metrics/0.log" Nov 24 12:32:11 crc kubenswrapper[5072]: I1124 12:32:11.012965 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/cp-reloader/0.log" Nov 24 12:32:11 crc kubenswrapper[5072]: I1124 12:32:11.016216 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/cp-frr-files/0.log" Nov 24 12:32:11 crc kubenswrapper[5072]: I1124 12:32:11.019883 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/cp-metrics/0.log" Nov 24 12:32:11 crc kubenswrapper[5072]: I1124 12:32:11.177067 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/cp-reloader/0.log" Nov 24 12:32:11 crc kubenswrapper[5072]: I1124 12:32:11.179852 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/cp-metrics/0.log" Nov 24 12:32:11 crc kubenswrapper[5072]: I1124 12:32:11.183486 5072 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/cp-frr-files/0.log" Nov 24 12:32:11 crc kubenswrapper[5072]: I1124 12:32:11.196128 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/controller/0.log" Nov 24 12:32:11 crc kubenswrapper[5072]: I1124 12:32:11.379466 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/kube-rbac-proxy-frr/0.log" Nov 24 12:32:11 crc kubenswrapper[5072]: I1124 12:32:11.397393 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/kube-rbac-proxy/0.log" Nov 24 12:32:11 crc kubenswrapper[5072]: I1124 12:32:11.416501 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/frr-metrics/0.log" Nov 24 12:32:11 crc kubenswrapper[5072]: I1124 12:32:11.625554 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/reloader/0.log" Nov 24 12:32:11 crc kubenswrapper[5072]: I1124 12:32:11.693669 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-mjmzs_a4839b57-91b0-4472-ac9e-fd342a3430c0/frr-k8s-webhook-server/0.log" Nov 24 12:32:11 crc kubenswrapper[5072]: I1124 12:32:11.911638 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-b6dc8dd56-6d5x5_30512acc-64dc-4a20-88e5-565a69d8f95c/manager/0.log" Nov 24 12:32:12 crc kubenswrapper[5072]: I1124 12:32:12.094977 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-75d856c88d-rz946_e3c19ac2-dba1-4b49-acb0-1f93285f60b2/webhook-server/0.log" Nov 24 12:32:12 crc kubenswrapper[5072]: I1124 12:32:12.227773 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xc9ht_e5b09acb-4f8f-45f4-b669-c491f59a52e1/kube-rbac-proxy/0.log" Nov 24 12:32:12 crc kubenswrapper[5072]: I1124 12:32:12.812620 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xc9ht_e5b09acb-4f8f-45f4-b669-c491f59a52e1/speaker/0.log" Nov 24 12:32:13 crc kubenswrapper[5072]: I1124 12:32:13.154302 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2nhqx_b1d8a0f3-7f9b-4e19-bfcf-addd8fff3b88/frr/0.log" Nov 24 12:32:25 crc kubenswrapper[5072]: I1124 12:32:25.925677 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw_0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76/util/0.log" Nov 24 12:32:26 crc kubenswrapper[5072]: I1124 12:32:26.106999 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw_0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76/pull/0.log" Nov 24 12:32:26 crc kubenswrapper[5072]: I1124 12:32:26.140450 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw_0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76/pull/0.log" Nov 24 12:32:26 crc kubenswrapper[5072]: I1124 12:32:26.172106 5072 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw_0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76/util/0.log" Nov 24 12:32:26 crc kubenswrapper[5072]: I1124 12:32:26.369114 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw_0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76/extract/0.log" Nov 24 12:32:26 crc kubenswrapper[5072]: I1124 12:32:26.369398 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw_0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76/util/0.log" Nov 24 12:32:26 crc kubenswrapper[5072]: I1124 12:32:26.417289 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez65cw_0b557c16-ec3a-4ee2-96cb-f1fbcfa23f76/pull/0.log" Nov 24 12:32:26 crc kubenswrapper[5072]: I1124 12:32:26.515178 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b8kkq_0b414b96-7437-45fe-82ff-663bdd600440/extract-utilities/0.log" Nov 24 12:32:26 crc kubenswrapper[5072]: I1124 12:32:26.747604 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b8kkq_0b414b96-7437-45fe-82ff-663bdd600440/extract-content/0.log" Nov 24 12:32:26 crc kubenswrapper[5072]: I1124 12:32:26.778261 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b8kkq_0b414b96-7437-45fe-82ff-663bdd600440/extract-utilities/0.log" Nov 24 12:32:26 crc kubenswrapper[5072]: I1124 12:32:26.806709 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b8kkq_0b414b96-7437-45fe-82ff-663bdd600440/extract-content/0.log" Nov 24 12:32:26 crc kubenswrapper[5072]: I1124 12:32:26.924863 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b8kkq_0b414b96-7437-45fe-82ff-663bdd600440/extract-utilities/0.log" Nov 24 12:32:26 crc kubenswrapper[5072]: I1124 12:32:26.966333 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b8kkq_0b414b96-7437-45fe-82ff-663bdd600440/extract-content/0.log" Nov 24 12:32:27 crc kubenswrapper[5072]: I1124 12:32:27.171544 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4nsmr_38853327-58cd-437a-9f17-6558118671bf/extract-utilities/0.log" Nov 24 12:32:27 crc kubenswrapper[5072]: I1124 12:32:27.419142 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4nsmr_38853327-58cd-437a-9f17-6558118671bf/extract-content/0.log" Nov 24 12:32:27 crc kubenswrapper[5072]: I1124 12:32:27.429146 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4nsmr_38853327-58cd-437a-9f17-6558118671bf/extract-utilities/0.log" Nov 24 12:32:27 crc kubenswrapper[5072]: I1124 12:32:27.439727 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4nsmr_38853327-58cd-437a-9f17-6558118671bf/extract-content/0.log" Nov 24 12:32:27 crc kubenswrapper[5072]: I1124 12:32:27.839713 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4nsmr_38853327-58cd-437a-9f17-6558118671bf/extract-content/0.log" Nov 24 12:32:27 
crc kubenswrapper[5072]: I1124 12:32:27.883806 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b8kkq_0b414b96-7437-45fe-82ff-663bdd600440/registry-server/0.log" Nov 24 12:32:27 crc kubenswrapper[5072]: I1124 12:32:27.898049 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4nsmr_38853327-58cd-437a-9f17-6558118671bf/extract-utilities/0.log" Nov 24 12:32:28 crc kubenswrapper[5072]: I1124 12:32:28.118812 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m_e5fd58fa-412d-4812-b49a-ad193626aed8/util/0.log" Nov 24 12:32:28 crc kubenswrapper[5072]: I1124 12:32:28.533353 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4nsmr_38853327-58cd-437a-9f17-6558118671bf/registry-server/0.log" Nov 24 12:32:28 crc kubenswrapper[5072]: I1124 12:32:28.923608 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m_e5fd58fa-412d-4812-b49a-ad193626aed8/pull/0.log" Nov 24 12:32:28 crc kubenswrapper[5072]: I1124 12:32:28.942351 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m_e5fd58fa-412d-4812-b49a-ad193626aed8/util/0.log" Nov 24 12:32:28 crc kubenswrapper[5072]: I1124 12:32:28.951320 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m_e5fd58fa-412d-4812-b49a-ad193626aed8/pull/0.log" Nov 24 12:32:29 crc kubenswrapper[5072]: I1124 12:32:29.122277 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m_e5fd58fa-412d-4812-b49a-ad193626aed8/pull/0.log" Nov 24 12:32:29 crc kubenswrapper[5072]: I1124 12:32:29.142872 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m_e5fd58fa-412d-4812-b49a-ad193626aed8/util/0.log" Nov 24 12:32:29 crc kubenswrapper[5072]: I1124 12:32:29.256931 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dxr5m_e5fd58fa-412d-4812-b49a-ad193626aed8/extract/0.log" Nov 24 12:32:29 crc kubenswrapper[5072]: I1124 12:32:29.312729 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-4scvq_f3db2294-11de-44ff-ac29-e9f1bcf6cd24/marketplace-operator/0.log" Nov 24 12:32:29 crc kubenswrapper[5072]: I1124 12:32:29.444337 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4jrmf_afa685e2-1d27-44a0-bdb9-ee494b9e8190/extract-utilities/0.log" Nov 24 12:32:29 crc kubenswrapper[5072]: I1124 12:32:29.669326 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4jrmf_afa685e2-1d27-44a0-bdb9-ee494b9e8190/extract-content/0.log" Nov 24 12:32:29 crc kubenswrapper[5072]: I1124 12:32:29.669493 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4jrmf_afa685e2-1d27-44a0-bdb9-ee494b9e8190/extract-content/0.log" Nov 24 12:32:29 crc kubenswrapper[5072]: I1124 12:32:29.681658 5072 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4jrmf_afa685e2-1d27-44a0-bdb9-ee494b9e8190/extract-utilities/0.log" Nov 24 12:32:29 crc kubenswrapper[5072]: I1124 12:32:29.859355 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4jrmf_afa685e2-1d27-44a0-bdb9-ee494b9e8190/extract-content/0.log" Nov 24 12:32:29 crc kubenswrapper[5072]: I1124 12:32:29.864719 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4jrmf_afa685e2-1d27-44a0-bdb9-ee494b9e8190/extract-utilities/0.log" Nov 24 12:32:29 crc kubenswrapper[5072]: I1124 12:32:29.970478 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j5htq_8b8c141a-32f9-41ba-95af-8448cf8cd002/extract-utilities/0.log" Nov 24 12:32:30 crc kubenswrapper[5072]: I1124 12:32:30.047162 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4jrmf_afa685e2-1d27-44a0-bdb9-ee494b9e8190/registry-server/0.log" Nov 24 12:32:30 crc kubenswrapper[5072]: I1124 12:32:30.170858 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j5htq_8b8c141a-32f9-41ba-95af-8448cf8cd002/extract-utilities/0.log" Nov 24 12:32:30 crc kubenswrapper[5072]: I1124 12:32:30.173008 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j5htq_8b8c141a-32f9-41ba-95af-8448cf8cd002/extract-content/0.log" Nov 24 12:32:30 crc kubenswrapper[5072]: I1124 12:32:30.214164 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j5htq_8b8c141a-32f9-41ba-95af-8448cf8cd002/extract-content/0.log" Nov 24 12:32:30 crc kubenswrapper[5072]: I1124 12:32:30.971826 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j5htq_8b8c141a-32f9-41ba-95af-8448cf8cd002/extract-utilities/0.log" Nov 24 12:32:30 crc kubenswrapper[5072]: I1124 12:32:30.981241 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j5htq_8b8c141a-32f9-41ba-95af-8448cf8cd002/extract-content/0.log" Nov 24 12:32:31 crc kubenswrapper[5072]: I1124 12:32:31.188171 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j5htq_8b8c141a-32f9-41ba-95af-8448cf8cd002/registry-server/0.log" Nov 24 12:33:13 crc kubenswrapper[5072]: I1124 12:33:13.645021 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:33:13 crc kubenswrapper[5072]: I1124 12:33:13.645422 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:33:43 crc kubenswrapper[5072]: I1124 12:33:43.645614 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:33:43 crc kubenswrapper[5072]: I1124 12:33:43.646542 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:34:13 crc kubenswrapper[5072]: I1124 12:34:13.644992 5072 patch_prober.go:28] interesting pod/machine-config-daemon-jfxnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:34:13 crc kubenswrapper[5072]: I1124 12:34:13.645649 5072 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:34:13 crc kubenswrapper[5072]: I1124 12:34:13.645699 5072 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" Nov 24 12:34:13 crc kubenswrapper[5072]: I1124 12:34:13.646631 5072 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c"} pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:34:13 crc kubenswrapper[5072]: I1124 12:34:13.646754 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerName="machine-config-daemon" containerID="cri-o://19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c" gracePeriod=600 Nov 24 12:34:13 crc kubenswrapper[5072]: E1124 12:34:13.943902 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:34:14 crc kubenswrapper[5072]: I1124 12:34:14.530262 5072 generic.go:334] "Generic (PLEG): container finished" podID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" containerID="19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c" exitCode=0 Nov 24 12:34:14 crc kubenswrapper[5072]: I1124 12:34:14.530502 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" event={"ID":"85ee6420-36f0-467c-acf4-ebea8b02c8d5","Type":"ContainerDied","Data":"19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c"} Nov 24 12:34:14 crc kubenswrapper[5072]: I1124 12:34:14.530844 5072 scope.go:117] "RemoveContainer" containerID="7b04b3f19e5637c82668c12efbec9e34299e5f49bfee3074b7fc7f031d0a99f9" Nov 24 12:34:14 crc kubenswrapper[5072]: I1124 
12:34:14.532309 5072 scope.go:117] "RemoveContainer" containerID="19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c" Nov 24 12:34:14 crc kubenswrapper[5072]: E1124 12:34:14.533031 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:34:27 crc kubenswrapper[5072]: I1124 12:34:27.694783 5072 generic.go:334] "Generic (PLEG): container finished" podID="84996fa3-ea52-4f6d-a4e2-5512ae4c119b" containerID="e7b8de71cdf8221471791a413a4dfdaa5eaada918fae64ff3b76739c35567c62" exitCode=0 Nov 24 12:34:27 crc kubenswrapper[5072]: I1124 12:34:27.695158 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvgw9/must-gather-2zwx9" event={"ID":"84996fa3-ea52-4f6d-a4e2-5512ae4c119b","Type":"ContainerDied","Data":"e7b8de71cdf8221471791a413a4dfdaa5eaada918fae64ff3b76739c35567c62"} Nov 24 12:34:27 crc kubenswrapper[5072]: I1124 12:34:27.697135 5072 scope.go:117] "RemoveContainer" containerID="e7b8de71cdf8221471791a413a4dfdaa5eaada918fae64ff3b76739c35567c62" Nov 24 12:34:28 crc kubenswrapper[5072]: I1124 12:34:28.428760 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hvgw9_must-gather-2zwx9_84996fa3-ea52-4f6d-a4e2-5512ae4c119b/gather/0.log" Nov 24 12:34:29 crc kubenswrapper[5072]: I1124 12:34:29.022451 5072 scope.go:117] "RemoveContainer" containerID="19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c" Nov 24 12:34:29 crc kubenswrapper[5072]: E1124 12:34:29.022711 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:34:39 crc kubenswrapper[5072]: I1124 12:34:39.057621 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hvgw9/must-gather-2zwx9"] Nov 24 12:34:39 crc kubenswrapper[5072]: I1124 12:34:39.058297 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-hvgw9/must-gather-2zwx9" podUID="84996fa3-ea52-4f6d-a4e2-5512ae4c119b" containerName="copy" containerID="cri-o://54eae4c0d781d7c31dab54c2d662b47c6e9e5f9ea3a6b60ecd3f0096d285c50d" gracePeriod=2 Nov 24 12:34:39 crc kubenswrapper[5072]: I1124 12:34:39.067190 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hvgw9/must-gather-2zwx9"] Nov 24 12:34:39 crc kubenswrapper[5072]: I1124 12:34:39.644300 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hvgw9_must-gather-2zwx9_84996fa3-ea52-4f6d-a4e2-5512ae4c119b/copy/0.log" Nov 24 12:34:39 crc kubenswrapper[5072]: I1124 12:34:39.645054 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hvgw9/must-gather-2zwx9" Nov 24 12:34:39 crc kubenswrapper[5072]: I1124 12:34:39.770092 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq858\" (UniqueName: \"kubernetes.io/projected/84996fa3-ea52-4f6d-a4e2-5512ae4c119b-kube-api-access-gq858\") pod \"84996fa3-ea52-4f6d-a4e2-5512ae4c119b\" (UID: \"84996fa3-ea52-4f6d-a4e2-5512ae4c119b\") " Nov 24 12:34:39 crc kubenswrapper[5072]: I1124 12:34:39.770174 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/84996fa3-ea52-4f6d-a4e2-5512ae4c119b-must-gather-output\") pod \"84996fa3-ea52-4f6d-a4e2-5512ae4c119b\" (UID: \"84996fa3-ea52-4f6d-a4e2-5512ae4c119b\") " Nov 24 12:34:39 crc kubenswrapper[5072]: I1124 12:34:39.776602 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84996fa3-ea52-4f6d-a4e2-5512ae4c119b-kube-api-access-gq858" (OuterVolumeSpecName: "kube-api-access-gq858") pod "84996fa3-ea52-4f6d-a4e2-5512ae4c119b" (UID: "84996fa3-ea52-4f6d-a4e2-5512ae4c119b"). InnerVolumeSpecName "kube-api-access-gq858". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:34:39 crc kubenswrapper[5072]: I1124 12:34:39.856767 5072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hvgw9_must-gather-2zwx9_84996fa3-ea52-4f6d-a4e2-5512ae4c119b/copy/0.log" Nov 24 12:34:39 crc kubenswrapper[5072]: I1124 12:34:39.857297 5072 generic.go:334] "Generic (PLEG): container finished" podID="84996fa3-ea52-4f6d-a4e2-5512ae4c119b" containerID="54eae4c0d781d7c31dab54c2d662b47c6e9e5f9ea3a6b60ecd3f0096d285c50d" exitCode=143 Nov 24 12:34:39 crc kubenswrapper[5072]: I1124 12:34:39.857483 5072 scope.go:117] "RemoveContainer" containerID="54eae4c0d781d7c31dab54c2d662b47c6e9e5f9ea3a6b60ecd3f0096d285c50d" Nov 24 12:34:39 crc kubenswrapper[5072]: I1124 12:34:39.857747 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvgw9/must-gather-2zwx9" Nov 24 12:34:39 crc kubenswrapper[5072]: I1124 12:34:39.873204 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gq858\" (UniqueName: \"kubernetes.io/projected/84996fa3-ea52-4f6d-a4e2-5512ae4c119b-kube-api-access-gq858\") on node \"crc\" DevicePath \"\"" Nov 24 12:34:39 crc kubenswrapper[5072]: I1124 12:34:39.916668 5072 scope.go:117] "RemoveContainer" containerID="e7b8de71cdf8221471791a413a4dfdaa5eaada918fae64ff3b76739c35567c62" Nov 24 12:34:39 crc kubenswrapper[5072]: I1124 12:34:39.988864 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84996fa3-ea52-4f6d-a4e2-5512ae4c119b-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "84996fa3-ea52-4f6d-a4e2-5512ae4c119b" (UID: "84996fa3-ea52-4f6d-a4e2-5512ae4c119b"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:34:40 crc kubenswrapper[5072]: I1124 12:34:40.040792 5072 scope.go:117] "RemoveContainer" containerID="54eae4c0d781d7c31dab54c2d662b47c6e9e5f9ea3a6b60ecd3f0096d285c50d" Nov 24 12:34:40 crc kubenswrapper[5072]: E1124 12:34:40.041181 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54eae4c0d781d7c31dab54c2d662b47c6e9e5f9ea3a6b60ecd3f0096d285c50d\": container with ID starting with 54eae4c0d781d7c31dab54c2d662b47c6e9e5f9ea3a6b60ecd3f0096d285c50d not found: ID does not exist" containerID="54eae4c0d781d7c31dab54c2d662b47c6e9e5f9ea3a6b60ecd3f0096d285c50d" Nov 24 12:34:40 crc kubenswrapper[5072]: I1124 12:34:40.041225 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54eae4c0d781d7c31dab54c2d662b47c6e9e5f9ea3a6b60ecd3f0096d285c50d"} err="failed to get container status \"54eae4c0d781d7c31dab54c2d662b47c6e9e5f9ea3a6b60ecd3f0096d285c50d\": rpc error: code = NotFound desc = could not find container \"54eae4c0d781d7c31dab54c2d662b47c6e9e5f9ea3a6b60ecd3f0096d285c50d\": container with ID starting with 54eae4c0d781d7c31dab54c2d662b47c6e9e5f9ea3a6b60ecd3f0096d285c50d not found: ID does not exist" Nov 24 12:34:40 crc kubenswrapper[5072]: I1124 12:34:40.041250 5072 scope.go:117] "RemoveContainer" containerID="e7b8de71cdf8221471791a413a4dfdaa5eaada918fae64ff3b76739c35567c62" Nov 24 12:34:40 crc kubenswrapper[5072]: E1124 12:34:40.045359 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7b8de71cdf8221471791a413a4dfdaa5eaada918fae64ff3b76739c35567c62\": container with ID starting with e7b8de71cdf8221471791a413a4dfdaa5eaada918fae64ff3b76739c35567c62 not found: ID does not exist" containerID="e7b8de71cdf8221471791a413a4dfdaa5eaada918fae64ff3b76739c35567c62" Nov 24 12:34:40 crc kubenswrapper[5072]: I1124 12:34:40.045427 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7b8de71cdf8221471791a413a4dfdaa5eaada918fae64ff3b76739c35567c62"} err="failed to get container status \"e7b8de71cdf8221471791a413a4dfdaa5eaada918fae64ff3b76739c35567c62\": rpc error: code = NotFound desc = could not find container \"e7b8de71cdf8221471791a413a4dfdaa5eaada918fae64ff3b76739c35567c62\": container with ID starting with e7b8de71cdf8221471791a413a4dfdaa5eaada918fae64ff3b76739c35567c62 not found: ID does not exist" Nov 24 12:34:40 crc kubenswrapper[5072]: I1124 12:34:40.084387 5072 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/84996fa3-ea52-4f6d-a4e2-5512ae4c119b-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 24 12:34:41 crc kubenswrapper[5072]: I1124 12:34:41.016615 5072 scope.go:117] "RemoveContainer" containerID="19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c" Nov 24 12:34:41 crc kubenswrapper[5072]: E1124 12:34:41.017057 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:34:41 crc kubenswrapper[5072]: I1124 12:34:41.027671 5072 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84996fa3-ea52-4f6d-a4e2-5512ae4c119b" path="/var/lib/kubelet/pods/84996fa3-ea52-4f6d-a4e2-5512ae4c119b/volumes" Nov 24 12:34:52 crc kubenswrapper[5072]: I1124 12:34:52.016831 5072 scope.go:117] "RemoveContainer" containerID="19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c" Nov 24 12:34:52 crc kubenswrapper[5072]: E1124 12:34:52.017750 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:35:06 crc kubenswrapper[5072]: I1124 12:35:06.016222 5072 scope.go:117] "RemoveContainer" containerID="19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c" Nov 24 12:35:06 crc kubenswrapper[5072]: E1124 12:35:06.017079 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:35:21 crc kubenswrapper[5072]: I1124 12:35:21.016865 5072 scope.go:117] "RemoveContainer" containerID="19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c" Nov 24 12:35:21 crc kubenswrapper[5072]: E1124 12:35:21.017623 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:35:24 crc kubenswrapper[5072]: I1124 12:35:24.209390 5072 scope.go:117] "RemoveContainer" containerID="9c571fb6bb6a171958eb7bda1220326140e13b00bbde779f6e77b5a19f24d6e0" Nov 24 12:35:34 crc kubenswrapper[5072]: I1124 12:35:34.016734 5072 scope.go:117] "RemoveContainer" containerID="19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c" Nov 24 12:35:34 crc kubenswrapper[5072]: E1124 12:35:34.017550 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:35:44 crc kubenswrapper[5072]: I1124 12:35:44.827299 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jj65b"] Nov 24 12:35:44 crc kubenswrapper[5072]: E1124 12:35:44.828201 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84996fa3-ea52-4f6d-a4e2-5512ae4c119b" containerName="gather" Nov 24 12:35:44 crc kubenswrapper[5072]: I1124 12:35:44.828213 5072 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="84996fa3-ea52-4f6d-a4e2-5512ae4c119b" containerName="gather" Nov 24 12:35:44 crc kubenswrapper[5072]: E1124 12:35:44.828239 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65729592-ebe2-4752-885e-7fb08c984125" containerName="collect-profiles" Nov 24 12:35:44 crc kubenswrapper[5072]: I1124 12:35:44.828245 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="65729592-ebe2-4752-885e-7fb08c984125" containerName="collect-profiles" Nov 24 12:35:44 crc kubenswrapper[5072]: E1124 12:35:44.828265 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84996fa3-ea52-4f6d-a4e2-5512ae4c119b" containerName="copy" Nov 24 12:35:44 crc kubenswrapper[5072]: I1124 12:35:44.828270 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="84996fa3-ea52-4f6d-a4e2-5512ae4c119b" containerName="copy" Nov 24 12:35:44 crc kubenswrapper[5072]: I1124 12:35:44.828463 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="65729592-ebe2-4752-885e-7fb08c984125" containerName="collect-profiles" Nov 24 12:35:44 crc kubenswrapper[5072]: I1124 12:35:44.828480 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="84996fa3-ea52-4f6d-a4e2-5512ae4c119b" containerName="copy" Nov 24 12:35:44 crc kubenswrapper[5072]: I1124 12:35:44.828491 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="84996fa3-ea52-4f6d-a4e2-5512ae4c119b" containerName="gather" Nov 24 12:35:44 crc kubenswrapper[5072]: I1124 12:35:44.829948 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jj65b" Nov 24 12:35:44 crc kubenswrapper[5072]: I1124 12:35:44.842109 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jj65b"] Nov 24 12:35:44 crc kubenswrapper[5072]: I1124 12:35:44.883479 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c52a7992-0755-40db-ad2d-2dff12d2e8e2-catalog-content\") pod \"certified-operators-jj65b\" (UID: \"c52a7992-0755-40db-ad2d-2dff12d2e8e2\") " pod="openshift-marketplace/certified-operators-jj65b" Nov 24 12:35:44 crc kubenswrapper[5072]: I1124 12:35:44.883547 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c52a7992-0755-40db-ad2d-2dff12d2e8e2-utilities\") pod \"certified-operators-jj65b\" (UID: \"c52a7992-0755-40db-ad2d-2dff12d2e8e2\") " pod="openshift-marketplace/certified-operators-jj65b" Nov 24 12:35:44 crc kubenswrapper[5072]: I1124 12:35:44.883623 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7gwq\" (UniqueName: \"kubernetes.io/projected/c52a7992-0755-40db-ad2d-2dff12d2e8e2-kube-api-access-l7gwq\") pod \"certified-operators-jj65b\" (UID: \"c52a7992-0755-40db-ad2d-2dff12d2e8e2\") " pod="openshift-marketplace/certified-operators-jj65b" Nov 24 12:35:44 crc kubenswrapper[5072]: I1124 12:35:44.986116 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c52a7992-0755-40db-ad2d-2dff12d2e8e2-catalog-content\") pod \"certified-operators-jj65b\" (UID: \"c52a7992-0755-40db-ad2d-2dff12d2e8e2\") " pod="openshift-marketplace/certified-operators-jj65b" Nov 24 12:35:44 crc kubenswrapper[5072]: I1124 12:35:44.986170 5072 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c52a7992-0755-40db-ad2d-2dff12d2e8e2-utilities\") pod \"certified-operators-jj65b\" (UID: \"c52a7992-0755-40db-ad2d-2dff12d2e8e2\") " pod="openshift-marketplace/certified-operators-jj65b" Nov 24 12:35:44 crc kubenswrapper[5072]: I1124 12:35:44.986216 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7gwq\" (UniqueName: \"kubernetes.io/projected/c52a7992-0755-40db-ad2d-2dff12d2e8e2-kube-api-access-l7gwq\") pod \"certified-operators-jj65b\" (UID: \"c52a7992-0755-40db-ad2d-2dff12d2e8e2\") " pod="openshift-marketplace/certified-operators-jj65b" Nov 24 12:35:44 crc kubenswrapper[5072]: I1124 12:35:44.987154 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c52a7992-0755-40db-ad2d-2dff12d2e8e2-catalog-content\") pod \"certified-operators-jj65b\" (UID: \"c52a7992-0755-40db-ad2d-2dff12d2e8e2\") " pod="openshift-marketplace/certified-operators-jj65b" Nov 24 12:35:44 crc kubenswrapper[5072]: I1124 12:35:44.987260 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c52a7992-0755-40db-ad2d-2dff12d2e8e2-utilities\") pod \"certified-operators-jj65b\" (UID: \"c52a7992-0755-40db-ad2d-2dff12d2e8e2\") " pod="openshift-marketplace/certified-operators-jj65b" Nov 24 12:35:45 crc kubenswrapper[5072]: I1124 12:35:45.016260 5072 scope.go:117] "RemoveContainer" containerID="19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c" Nov 24 12:35:45 crc kubenswrapper[5072]: E1124 12:35:45.016565 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:35:45 crc kubenswrapper[5072]: I1124 12:35:45.385866 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7gwq\" (UniqueName: \"kubernetes.io/projected/c52a7992-0755-40db-ad2d-2dff12d2e8e2-kube-api-access-l7gwq\") pod \"certified-operators-jj65b\" (UID: \"c52a7992-0755-40db-ad2d-2dff12d2e8e2\") " pod="openshift-marketplace/certified-operators-jj65b" Nov 24 12:35:45 crc kubenswrapper[5072]: I1124 12:35:45.456178 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jj65b" Nov 24 12:35:45 crc kubenswrapper[5072]: I1124 12:35:45.929669 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jj65b"] Nov 24 12:35:46 crc kubenswrapper[5072]: I1124 12:35:46.480882 5072 generic.go:334] "Generic (PLEG): container finished" podID="c52a7992-0755-40db-ad2d-2dff12d2e8e2" containerID="4a5e7ba310a0dc097b202bcaaf2feee366aebf2e26743e447a831fff2b0ea7fe" exitCode=0 Nov 24 12:35:46 crc kubenswrapper[5072]: I1124 12:35:46.480937 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jj65b" event={"ID":"c52a7992-0755-40db-ad2d-2dff12d2e8e2","Type":"ContainerDied","Data":"4a5e7ba310a0dc097b202bcaaf2feee366aebf2e26743e447a831fff2b0ea7fe"} Nov 24 12:35:46 crc kubenswrapper[5072]: I1124 12:35:46.481158 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jj65b" event={"ID":"c52a7992-0755-40db-ad2d-2dff12d2e8e2","Type":"ContainerStarted","Data":"c9144f133e0ffeda38f835069a7ff5d8ba180869ca1ad7aa1339c0ee68eff275"} Nov 24 12:35:46 crc kubenswrapper[5072]: I1124 12:35:46.483149 5072 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:35:48 crc kubenswrapper[5072]: I1124 12:35:48.500988 5072 generic.go:334] "Generic (PLEG): container finished" podID="c52a7992-0755-40db-ad2d-2dff12d2e8e2" containerID="2efb8518f95c54b8b925bc8e92ecebfdbdeb6dd2e51f7d95ad598c474382d973" exitCode=0 Nov 24 12:35:48 crc kubenswrapper[5072]: I1124 12:35:48.501094 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jj65b" event={"ID":"c52a7992-0755-40db-ad2d-2dff12d2e8e2","Type":"ContainerDied","Data":"2efb8518f95c54b8b925bc8e92ecebfdbdeb6dd2e51f7d95ad598c474382d973"} Nov 24 12:35:50 crc kubenswrapper[5072]: I1124 12:35:50.519046 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jj65b" event={"ID":"c52a7992-0755-40db-ad2d-2dff12d2e8e2","Type":"ContainerStarted","Data":"e84cd81cf2db19ce328267c9e4b153bf0a4039f4d63fa3e3f2ca6aec4f15b2f2"} Nov 24 12:35:50 crc kubenswrapper[5072]: I1124 12:35:50.545649 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jj65b" podStartSLOduration=3.238338982 podStartE2EDuration="6.545630233s" podCreationTimestamp="2025-11-24 12:35:44 +0000 UTC" firstStartedPulling="2025-11-24 12:35:46.482949152 +0000 UTC m=+5198.194473628" lastFinishedPulling="2025-11-24 12:35:49.790240393 +0000 UTC m=+5201.501764879" observedRunningTime="2025-11-24 12:35:50.536412314 +0000 UTC m=+5202.247936790" watchObservedRunningTime="2025-11-24 12:35:50.545630233 +0000 UTC m=+5202.257154719" Nov 24 12:35:55 crc kubenswrapper[5072]: I1124 12:35:55.456905 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jj65b" Nov 24 12:35:55 crc kubenswrapper[5072]: I1124 12:35:55.457507 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jj65b" Nov 24 12:35:56 crc kubenswrapper[5072]: I1124 12:35:56.128284 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jj65b" Nov 24 12:35:56 crc kubenswrapper[5072]: I1124 12:35:56.176661 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/certified-operators-jj65b" Nov 24 12:35:56 crc kubenswrapper[5072]: I1124 12:35:56.376311 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jj65b"] Nov 24 12:35:57 crc kubenswrapper[5072]: I1124 12:35:57.597157 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jj65b" podUID="c52a7992-0755-40db-ad2d-2dff12d2e8e2" containerName="registry-server" containerID="cri-o://e84cd81cf2db19ce328267c9e4b153bf0a4039f4d63fa3e3f2ca6aec4f15b2f2" gracePeriod=2 Nov 24 12:35:58 crc kubenswrapper[5072]: I1124 12:35:58.607047 5072 generic.go:334] "Generic (PLEG): container finished" podID="c52a7992-0755-40db-ad2d-2dff12d2e8e2" containerID="e84cd81cf2db19ce328267c9e4b153bf0a4039f4d63fa3e3f2ca6aec4f15b2f2" exitCode=0 Nov 24 12:35:58 crc kubenswrapper[5072]: I1124 12:35:58.607143 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jj65b" event={"ID":"c52a7992-0755-40db-ad2d-2dff12d2e8e2","Type":"ContainerDied","Data":"e84cd81cf2db19ce328267c9e4b153bf0a4039f4d63fa3e3f2ca6aec4f15b2f2"} Nov 24 12:35:59 crc kubenswrapper[5072]: I1124 12:35:59.023436 5072 scope.go:117] "RemoveContainer" containerID="19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c" Nov 24 12:35:59 crc kubenswrapper[5072]: E1124 12:35:59.024449 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:35:59 crc kubenswrapper[5072]: I1124 12:35:59.108916 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jj65b" Nov 24 12:35:59 crc kubenswrapper[5072]: I1124 12:35:59.189513 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c52a7992-0755-40db-ad2d-2dff12d2e8e2-catalog-content\") pod \"c52a7992-0755-40db-ad2d-2dff12d2e8e2\" (UID: \"c52a7992-0755-40db-ad2d-2dff12d2e8e2\") " Nov 24 12:35:59 crc kubenswrapper[5072]: I1124 12:35:59.189673 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c52a7992-0755-40db-ad2d-2dff12d2e8e2-utilities\") pod \"c52a7992-0755-40db-ad2d-2dff12d2e8e2\" (UID: \"c52a7992-0755-40db-ad2d-2dff12d2e8e2\") " Nov 24 12:35:59 crc kubenswrapper[5072]: I1124 12:35:59.189718 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7gwq\" (UniqueName: \"kubernetes.io/projected/c52a7992-0755-40db-ad2d-2dff12d2e8e2-kube-api-access-l7gwq\") pod \"c52a7992-0755-40db-ad2d-2dff12d2e8e2\" (UID: \"c52a7992-0755-40db-ad2d-2dff12d2e8e2\") " Nov 24 12:35:59 crc kubenswrapper[5072]: I1124 12:35:59.191161 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c52a7992-0755-40db-ad2d-2dff12d2e8e2-utilities" (OuterVolumeSpecName: "utilities") pod "c52a7992-0755-40db-ad2d-2dff12d2e8e2" (UID: "c52a7992-0755-40db-ad2d-2dff12d2e8e2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:35:59 crc kubenswrapper[5072]: I1124 12:35:59.196206 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c52a7992-0755-40db-ad2d-2dff12d2e8e2-kube-api-access-l7gwq" (OuterVolumeSpecName: "kube-api-access-l7gwq") pod "c52a7992-0755-40db-ad2d-2dff12d2e8e2" (UID: "c52a7992-0755-40db-ad2d-2dff12d2e8e2"). InnerVolumeSpecName "kube-api-access-l7gwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:35:59 crc kubenswrapper[5072]: I1124 12:35:59.291870 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c52a7992-0755-40db-ad2d-2dff12d2e8e2-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:35:59 crc kubenswrapper[5072]: I1124 12:35:59.291919 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7gwq\" (UniqueName: \"kubernetes.io/projected/c52a7992-0755-40db-ad2d-2dff12d2e8e2-kube-api-access-l7gwq\") on node \"crc\" DevicePath \"\"" Nov 24 12:35:59 crc kubenswrapper[5072]: I1124 12:35:59.412225 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c52a7992-0755-40db-ad2d-2dff12d2e8e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c52a7992-0755-40db-ad2d-2dff12d2e8e2" (UID: "c52a7992-0755-40db-ad2d-2dff12d2e8e2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:35:59 crc kubenswrapper[5072]: I1124 12:35:59.496413 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c52a7992-0755-40db-ad2d-2dff12d2e8e2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:35:59 crc kubenswrapper[5072]: I1124 12:35:59.619579 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jj65b" event={"ID":"c52a7992-0755-40db-ad2d-2dff12d2e8e2","Type":"ContainerDied","Data":"c9144f133e0ffeda38f835069a7ff5d8ba180869ca1ad7aa1339c0ee68eff275"} Nov 24 12:35:59 crc kubenswrapper[5072]: I1124 12:35:59.619637 5072 scope.go:117] "RemoveContainer" containerID="e84cd81cf2db19ce328267c9e4b153bf0a4039f4d63fa3e3f2ca6aec4f15b2f2" Nov 24 12:35:59 crc kubenswrapper[5072]: I1124 12:35:59.619677 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jj65b" Nov 24 12:35:59 crc kubenswrapper[5072]: I1124 12:35:59.641262 5072 scope.go:117] "RemoveContainer" containerID="2efb8518f95c54b8b925bc8e92ecebfdbdeb6dd2e51f7d95ad598c474382d973" Nov 24 12:35:59 crc kubenswrapper[5072]: I1124 12:35:59.655049 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jj65b"] Nov 24 12:35:59 crc kubenswrapper[5072]: I1124 12:35:59.662564 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jj65b"] Nov 24 12:35:59 crc kubenswrapper[5072]: I1124 12:35:59.674303 5072 scope.go:117] "RemoveContainer" containerID="4a5e7ba310a0dc097b202bcaaf2feee366aebf2e26743e447a831fff2b0ea7fe" Nov 24 12:36:01 crc kubenswrapper[5072]: I1124 12:36:01.044436 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c52a7992-0755-40db-ad2d-2dff12d2e8e2" path="/var/lib/kubelet/pods/c52a7992-0755-40db-ad2d-2dff12d2e8e2/volumes" Nov 24 12:36:10 crc kubenswrapper[5072]: I1124 12:36:10.016844 5072 scope.go:117] "RemoveContainer" containerID="19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c" Nov 24 12:36:10 crc kubenswrapper[5072]: E1124 12:36:10.017717 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:36:18 crc kubenswrapper[5072]: I1124 12:36:18.751832 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hkztx"] Nov 24 12:36:18 crc kubenswrapper[5072]: E1124 12:36:18.752803 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c52a7992-0755-40db-ad2d-2dff12d2e8e2" containerName="extract-utilities" Nov 24 12:36:18 crc kubenswrapper[5072]: I1124 12:36:18.752814 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="c52a7992-0755-40db-ad2d-2dff12d2e8e2" containerName="extract-utilities" Nov 24 12:36:18 crc kubenswrapper[5072]: E1124 12:36:18.752823 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c52a7992-0755-40db-ad2d-2dff12d2e8e2" containerName="extract-content" Nov 24 12:36:18 crc kubenswrapper[5072]: I1124 12:36:18.752828 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="c52a7992-0755-40db-ad2d-2dff12d2e8e2" containerName="extract-content" Nov 24 12:36:18 crc kubenswrapper[5072]: E1124 12:36:18.752834 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c52a7992-0755-40db-ad2d-2dff12d2e8e2" containerName="registry-server" Nov 24 12:36:18 crc kubenswrapper[5072]: I1124 12:36:18.752840 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="c52a7992-0755-40db-ad2d-2dff12d2e8e2" containerName="registry-server" Nov 24 12:36:18 crc kubenswrapper[5072]: I1124 12:36:18.753069 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="c52a7992-0755-40db-ad2d-2dff12d2e8e2" containerName="registry-server" Nov 24 12:36:18 crc kubenswrapper[5072]: I1124 12:36:18.754593 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hkztx" Nov 24 12:36:18 crc kubenswrapper[5072]: I1124 12:36:18.759924 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hkztx"] Nov 24 12:36:18 crc kubenswrapper[5072]: I1124 12:36:18.798722 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfclx\" (UniqueName: \"kubernetes.io/projected/d1e63169-bdcd-4caa-921f-8d421b69d523-kube-api-access-lfclx\") pod \"redhat-operators-hkztx\" (UID: \"d1e63169-bdcd-4caa-921f-8d421b69d523\") " pod="openshift-marketplace/redhat-operators-hkztx" Nov 24 12:36:18 crc kubenswrapper[5072]: I1124 12:36:18.799168 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1e63169-bdcd-4caa-921f-8d421b69d523-catalog-content\") pod \"redhat-operators-hkztx\" (UID: \"d1e63169-bdcd-4caa-921f-8d421b69d523\") " pod="openshift-marketplace/redhat-operators-hkztx" Nov 24 12:36:18 crc kubenswrapper[5072]: I1124 12:36:18.799271 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1e63169-bdcd-4caa-921f-8d421b69d523-utilities\") pod \"redhat-operators-hkztx\" (UID: \"d1e63169-bdcd-4caa-921f-8d421b69d523\") " pod="openshift-marketplace/redhat-operators-hkztx" Nov 24 12:36:18 crc kubenswrapper[5072]: I1124 12:36:18.900849 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfclx\" (UniqueName: \"kubernetes.io/projected/d1e63169-bdcd-4caa-921f-8d421b69d523-kube-api-access-lfclx\") pod \"redhat-operators-hkztx\" (UID: \"d1e63169-bdcd-4caa-921f-8d421b69d523\") " pod="openshift-marketplace/redhat-operators-hkztx" Nov 24 12:36:18 crc kubenswrapper[5072]: I1124 12:36:18.900960 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1e63169-bdcd-4caa-921f-8d421b69d523-catalog-content\") pod \"redhat-operators-hkztx\" (UID: \"d1e63169-bdcd-4caa-921f-8d421b69d523\") " pod="openshift-marketplace/redhat-operators-hkztx" Nov 24 12:36:18 crc kubenswrapper[5072]: I1124 12:36:18.901084 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1e63169-bdcd-4caa-921f-8d421b69d523-utilities\") pod \"redhat-operators-hkztx\" (UID: \"d1e63169-bdcd-4caa-921f-8d421b69d523\") " pod="openshift-marketplace/redhat-operators-hkztx" Nov 24 12:36:18 crc kubenswrapper[5072]: I1124 12:36:18.901692 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1e63169-bdcd-4caa-921f-8d421b69d523-utilities\") pod \"redhat-operators-hkztx\" (UID: \"d1e63169-bdcd-4caa-921f-8d421b69d523\") " pod="openshift-marketplace/redhat-operators-hkztx" Nov 24 12:36:18 crc kubenswrapper[5072]: I1124 12:36:18.901850 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1e63169-bdcd-4caa-921f-8d421b69d523-catalog-content\") pod \"redhat-operators-hkztx\" (UID: \"d1e63169-bdcd-4caa-921f-8d421b69d523\") " pod="openshift-marketplace/redhat-operators-hkztx" Nov 24 12:36:19 crc kubenswrapper[5072]: I1124 12:36:19.285506 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lfclx\" (UniqueName: \"kubernetes.io/projected/d1e63169-bdcd-4caa-921f-8d421b69d523-kube-api-access-lfclx\") pod \"redhat-operators-hkztx\" (UID: \"d1e63169-bdcd-4caa-921f-8d421b69d523\") " pod="openshift-marketplace/redhat-operators-hkztx" Nov 24 12:36:19 crc kubenswrapper[5072]: I1124 12:36:19.394589 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hkztx" Nov 24 12:36:19 crc kubenswrapper[5072]: I1124 12:36:19.909101 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hkztx"] Nov 24 12:36:20 crc kubenswrapper[5072]: I1124 12:36:20.858583 5072 generic.go:334] "Generic (PLEG): container finished" podID="d1e63169-bdcd-4caa-921f-8d421b69d523" containerID="2008bdaff8aed4831a4aefe5ecc172b9f560329b6f030f37dbfc2c9b876392c5" exitCode=0 Nov 24 12:36:20 crc kubenswrapper[5072]: I1124 12:36:20.859563 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkztx" event={"ID":"d1e63169-bdcd-4caa-921f-8d421b69d523","Type":"ContainerDied","Data":"2008bdaff8aed4831a4aefe5ecc172b9f560329b6f030f37dbfc2c9b876392c5"} Nov 24 12:36:20 crc kubenswrapper[5072]: I1124 12:36:20.859713 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkztx" event={"ID":"d1e63169-bdcd-4caa-921f-8d421b69d523","Type":"ContainerStarted","Data":"94ce87cfa15ef13c4389028893054b36eedc9c7e6ad2297b7e6747004349b548"} Nov 24 12:36:22 crc kubenswrapper[5072]: I1124 12:36:22.017661 5072 scope.go:117] "RemoveContainer" containerID="19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c" Nov 24 12:36:22 crc kubenswrapper[5072]: E1124 12:36:22.018196 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:36:22 crc kubenswrapper[5072]: I1124 12:36:22.879521 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkztx" event={"ID":"d1e63169-bdcd-4caa-921f-8d421b69d523","Type":"ContainerStarted","Data":"08dfa5892fc688bcfa1a8bc71fcad5deef3bfb74b370c8576d0548f7a2928f34"} Nov 24 12:36:24 crc kubenswrapper[5072]: I1124 12:36:24.312560 5072 scope.go:117] "RemoveContainer" containerID="0b4e5684d3590ff65e325e0e99643a1255964ebd31990185010d18e19291a009" Nov 24 12:36:27 crc kubenswrapper[5072]: I1124 12:36:27.931318 5072 generic.go:334] "Generic (PLEG): container finished" podID="d1e63169-bdcd-4caa-921f-8d421b69d523" containerID="08dfa5892fc688bcfa1a8bc71fcad5deef3bfb74b370c8576d0548f7a2928f34" exitCode=0 Nov 24 12:36:27 crc kubenswrapper[5072]: I1124 12:36:27.931417 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkztx" event={"ID":"d1e63169-bdcd-4caa-921f-8d421b69d523","Type":"ContainerDied","Data":"08dfa5892fc688bcfa1a8bc71fcad5deef3bfb74b370c8576d0548f7a2928f34"} Nov 24 12:36:28 crc kubenswrapper[5072]: I1124 12:36:28.944345 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkztx" 
event={"ID":"d1e63169-bdcd-4caa-921f-8d421b69d523","Type":"ContainerStarted","Data":"38baace5653ae8516b8766c118e30dbe21bc2dd63280971db8e64842aeaa5134"} Nov 24 12:36:28 crc kubenswrapper[5072]: I1124 12:36:28.977967 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hkztx" podStartSLOduration=3.464298176 podStartE2EDuration="10.977942667s" podCreationTimestamp="2025-11-24 12:36:18 +0000 UTC" firstStartedPulling="2025-11-24 12:36:20.862773799 +0000 UTC m=+5232.574298275" lastFinishedPulling="2025-11-24 12:36:28.37641827 +0000 UTC m=+5240.087942766" observedRunningTime="2025-11-24 12:36:28.96482918 +0000 UTC m=+5240.676353676" watchObservedRunningTime="2025-11-24 12:36:28.977942667 +0000 UTC m=+5240.689467153" Nov 24 12:36:29 crc kubenswrapper[5072]: I1124 12:36:29.395908 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hkztx" Nov 24 12:36:29 crc kubenswrapper[5072]: I1124 12:36:29.396738 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hkztx" Nov 24 12:36:30 crc kubenswrapper[5072]: I1124 12:36:30.465506 5072 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hkztx" podUID="d1e63169-bdcd-4caa-921f-8d421b69d523" containerName="registry-server" probeResult="failure" output=< Nov 24 12:36:30 crc kubenswrapper[5072]: timeout: failed to connect service ":50051" within 1s Nov 24 12:36:30 crc kubenswrapper[5072]: > Nov 24 12:36:36 crc kubenswrapper[5072]: I1124 12:36:36.016852 5072 scope.go:117] "RemoveContainer" containerID="19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c" Nov 24 12:36:36 crc kubenswrapper[5072]: E1124 12:36:36.018018 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:36:39 crc kubenswrapper[5072]: I1124 12:36:39.481165 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hkztx" Nov 24 12:36:39 crc kubenswrapper[5072]: I1124 12:36:39.555962 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hkztx" Nov 24 12:36:39 crc kubenswrapper[5072]: I1124 12:36:39.734083 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hkztx"] Nov 24 12:36:41 crc kubenswrapper[5072]: I1124 12:36:41.052772 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hkztx" podUID="d1e63169-bdcd-4caa-921f-8d421b69d523" containerName="registry-server" containerID="cri-o://38baace5653ae8516b8766c118e30dbe21bc2dd63280971db8e64842aeaa5134" gracePeriod=2 Nov 24 12:36:42 crc kubenswrapper[5072]: I1124 12:36:42.063550 5072 generic.go:334] "Generic (PLEG): container finished" podID="d1e63169-bdcd-4caa-921f-8d421b69d523" containerID="38baace5653ae8516b8766c118e30dbe21bc2dd63280971db8e64842aeaa5134" exitCode=0 Nov 24 12:36:42 crc kubenswrapper[5072]: I1124 12:36:42.063638 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-hkztx" event={"ID":"d1e63169-bdcd-4caa-921f-8d421b69d523","Type":"ContainerDied","Data":"38baace5653ae8516b8766c118e30dbe21bc2dd63280971db8e64842aeaa5134"} Nov 24 12:36:42 crc kubenswrapper[5072]: I1124 12:36:42.221427 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hkztx" Nov 24 12:36:42 crc kubenswrapper[5072]: I1124 12:36:42.337627 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1e63169-bdcd-4caa-921f-8d421b69d523-catalog-content\") pod \"d1e63169-bdcd-4caa-921f-8d421b69d523\" (UID: \"d1e63169-bdcd-4caa-921f-8d421b69d523\") " Nov 24 12:36:42 crc kubenswrapper[5072]: I1124 12:36:42.337790 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1e63169-bdcd-4caa-921f-8d421b69d523-utilities\") pod \"d1e63169-bdcd-4caa-921f-8d421b69d523\" (UID: \"d1e63169-bdcd-4caa-921f-8d421b69d523\") " Nov 24 12:36:42 crc kubenswrapper[5072]: I1124 12:36:42.338030 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfclx\" (UniqueName: \"kubernetes.io/projected/d1e63169-bdcd-4caa-921f-8d421b69d523-kube-api-access-lfclx\") pod \"d1e63169-bdcd-4caa-921f-8d421b69d523\" (UID: \"d1e63169-bdcd-4caa-921f-8d421b69d523\") " Nov 24 12:36:42 crc kubenswrapper[5072]: I1124 12:36:42.340657 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1e63169-bdcd-4caa-921f-8d421b69d523-utilities" (OuterVolumeSpecName: "utilities") pod "d1e63169-bdcd-4caa-921f-8d421b69d523" (UID: "d1e63169-bdcd-4caa-921f-8d421b69d523"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:36:42 crc kubenswrapper[5072]: I1124 12:36:42.345409 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1e63169-bdcd-4caa-921f-8d421b69d523-kube-api-access-lfclx" (OuterVolumeSpecName: "kube-api-access-lfclx") pod "d1e63169-bdcd-4caa-921f-8d421b69d523" (UID: "d1e63169-bdcd-4caa-921f-8d421b69d523"). InnerVolumeSpecName "kube-api-access-lfclx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:36:42 crc kubenswrapper[5072]: I1124 12:36:42.422446 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1e63169-bdcd-4caa-921f-8d421b69d523-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1e63169-bdcd-4caa-921f-8d421b69d523" (UID: "d1e63169-bdcd-4caa-921f-8d421b69d523"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:36:42 crc kubenswrapper[5072]: I1124 12:36:42.516693 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1e63169-bdcd-4caa-921f-8d421b69d523-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:42 crc kubenswrapper[5072]: I1124 12:36:42.516728 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1e63169-bdcd-4caa-921f-8d421b69d523-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:42 crc kubenswrapper[5072]: I1124 12:36:42.516741 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfclx\" (UniqueName: \"kubernetes.io/projected/d1e63169-bdcd-4caa-921f-8d421b69d523-kube-api-access-lfclx\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:43 crc kubenswrapper[5072]: I1124 12:36:43.207585 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hkztx" Nov 24 12:36:43 crc kubenswrapper[5072]: I1124 12:36:43.216741 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkztx" event={"ID":"d1e63169-bdcd-4caa-921f-8d421b69d523","Type":"ContainerDied","Data":"94ce87cfa15ef13c4389028893054b36eedc9c7e6ad2297b7e6747004349b548"} Nov 24 12:36:43 crc kubenswrapper[5072]: I1124 12:36:43.216796 5072 scope.go:117] "RemoveContainer" containerID="38baace5653ae8516b8766c118e30dbe21bc2dd63280971db8e64842aeaa5134" Nov 24 12:36:43 crc kubenswrapper[5072]: I1124 12:36:43.253500 5072 scope.go:117] "RemoveContainer" containerID="08dfa5892fc688bcfa1a8bc71fcad5deef3bfb74b370c8576d0548f7a2928f34" Nov 24 12:36:43 crc kubenswrapper[5072]: I1124 12:36:43.262073 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hkztx"] Nov 24 12:36:43 crc kubenswrapper[5072]: I1124 12:36:43.275839 5072 scope.go:117] "RemoveContainer" containerID="2008bdaff8aed4831a4aefe5ecc172b9f560329b6f030f37dbfc2c9b876392c5" Nov 24 12:36:43 crc kubenswrapper[5072]: I1124 12:36:43.278596 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hkztx"] Nov 24 12:36:45 crc kubenswrapper[5072]: I1124 12:36:45.072685 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1e63169-bdcd-4caa-921f-8d421b69d523" path="/var/lib/kubelet/pods/d1e63169-bdcd-4caa-921f-8d421b69d523/volumes" Nov 24 12:36:48 crc kubenswrapper[5072]: I1124 12:36:48.016339 5072 scope.go:117] "RemoveContainer" containerID="19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c" Nov 24 12:36:48 crc kubenswrapper[5072]: E1124 12:36:48.018636 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:37:01 crc kubenswrapper[5072]: I1124 12:37:01.018151 5072 scope.go:117] "RemoveContainer" containerID="19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c" Nov 24 12:37:01 crc kubenswrapper[5072]: E1124 12:37:01.019935 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:37:12 crc kubenswrapper[5072]: I1124 12:37:12.017025 5072 scope.go:117] "RemoveContainer" containerID="19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c" Nov 24 12:37:12 crc kubenswrapper[5072]: E1124 12:37:12.018022 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:37:24 crc kubenswrapper[5072]: I1124 12:37:24.017473 5072 scope.go:117] "RemoveContainer" containerID="19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c" Nov 24 12:37:24 crc kubenswrapper[5072]: E1124 12:37:24.019199 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:37:36 crc kubenswrapper[5072]: I1124 12:37:36.016179 5072 scope.go:117] "RemoveContainer" containerID="19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c" Nov 24 12:37:36 crc kubenswrapper[5072]: E1124 12:37:36.016867 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:37:48 crc kubenswrapper[5072]: I1124 12:37:48.016695 5072 scope.go:117] "RemoveContainer" containerID="19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c" Nov 24 12:37:48 crc kubenswrapper[5072]: E1124 12:37:48.017602 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:38:01 crc kubenswrapper[5072]: I1124 12:38:01.017295 5072 scope.go:117] "RemoveContainer" containerID="19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c" Nov 24 12:38:01 crc kubenswrapper[5072]: E1124 12:38:01.018726 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:38:16 crc kubenswrapper[5072]: I1124 12:38:16.016822 5072 scope.go:117] "RemoveContainer" containerID="19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c" Nov 24 12:38:16 crc kubenswrapper[5072]: E1124 12:38:16.017554 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:38:22 crc kubenswrapper[5072]: I1124 12:38:22.492600 5072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sk4n2"] Nov 24 12:38:22 crc kubenswrapper[5072]: E1124 12:38:22.493533 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1e63169-bdcd-4caa-921f-8d421b69d523" containerName="extract-content" Nov 24 12:38:22 crc kubenswrapper[5072]: I1124 12:38:22.493547 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1e63169-bdcd-4caa-921f-8d421b69d523" containerName="extract-content" Nov 24 12:38:22 crc kubenswrapper[5072]: E1124 12:38:22.493568 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1e63169-bdcd-4caa-921f-8d421b69d523" containerName="registry-server" Nov 24 12:38:22 crc kubenswrapper[5072]: I1124 12:38:22.493574 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1e63169-bdcd-4caa-921f-8d421b69d523" containerName="registry-server" Nov 24 12:38:22 crc kubenswrapper[5072]: E1124 12:38:22.493597 5072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1e63169-bdcd-4caa-921f-8d421b69d523" containerName="extract-utilities" Nov 24 12:38:22 crc kubenswrapper[5072]: I1124 12:38:22.493603 5072 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1e63169-bdcd-4caa-921f-8d421b69d523" containerName="extract-utilities" Nov 24 12:38:22 crc kubenswrapper[5072]: I1124 12:38:22.493771 5072 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1e63169-bdcd-4caa-921f-8d421b69d523" containerName="registry-server" Nov 24 12:38:22 crc kubenswrapper[5072]: I1124 12:38:22.495113 5072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sk4n2" Nov 24 12:38:22 crc kubenswrapper[5072]: I1124 12:38:22.518463 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sk4n2"] Nov 24 12:38:22 crc kubenswrapper[5072]: I1124 12:38:22.612624 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwjlv\" (UniqueName: \"kubernetes.io/projected/a66ceb80-1985-4a36-b4af-f9e556b095d5-kube-api-access-lwjlv\") pod \"redhat-marketplace-sk4n2\" (UID: \"a66ceb80-1985-4a36-b4af-f9e556b095d5\") " pod="openshift-marketplace/redhat-marketplace-sk4n2" Nov 24 12:38:22 crc kubenswrapper[5072]: I1124 12:38:22.612990 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a66ceb80-1985-4a36-b4af-f9e556b095d5-catalog-content\") pod \"redhat-marketplace-sk4n2\" (UID: \"a66ceb80-1985-4a36-b4af-f9e556b095d5\") " pod="openshift-marketplace/redhat-marketplace-sk4n2" Nov 24 12:38:22 crc kubenswrapper[5072]: I1124 12:38:22.613129 5072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a66ceb80-1985-4a36-b4af-f9e556b095d5-utilities\") pod \"redhat-marketplace-sk4n2\" (UID: \"a66ceb80-1985-4a36-b4af-f9e556b095d5\") " pod="openshift-marketplace/redhat-marketplace-sk4n2" Nov 24 12:38:22 crc kubenswrapper[5072]: I1124 12:38:22.718783 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwjlv\" (UniqueName: \"kubernetes.io/projected/a66ceb80-1985-4a36-b4af-f9e556b095d5-kube-api-access-lwjlv\") pod \"redhat-marketplace-sk4n2\" (UID: \"a66ceb80-1985-4a36-b4af-f9e556b095d5\") " pod="openshift-marketplace/redhat-marketplace-sk4n2" Nov 24 12:38:22 crc kubenswrapper[5072]: I1124 12:38:22.718956 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a66ceb80-1985-4a36-b4af-f9e556b095d5-catalog-content\") pod \"redhat-marketplace-sk4n2\" (UID: \"a66ceb80-1985-4a36-b4af-f9e556b095d5\") " pod="openshift-marketplace/redhat-marketplace-sk4n2" Nov 24 12:38:22 crc kubenswrapper[5072]: I1124 12:38:22.719006 5072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a66ceb80-1985-4a36-b4af-f9e556b095d5-utilities\") pod \"redhat-marketplace-sk4n2\" (UID: \"a66ceb80-1985-4a36-b4af-f9e556b095d5\") " pod="openshift-marketplace/redhat-marketplace-sk4n2" Nov 24 12:38:22 crc kubenswrapper[5072]: I1124 12:38:22.719623 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a66ceb80-1985-4a36-b4af-f9e556b095d5-utilities\") pod \"redhat-marketplace-sk4n2\" (UID: \"a66ceb80-1985-4a36-b4af-f9e556b095d5\") " pod="openshift-marketplace/redhat-marketplace-sk4n2" Nov 24 12:38:22 crc kubenswrapper[5072]: I1124 12:38:22.720701 5072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a66ceb80-1985-4a36-b4af-f9e556b095d5-catalog-content\") pod \"redhat-marketplace-sk4n2\" (UID: \"a66ceb80-1985-4a36-b4af-f9e556b095d5\") " pod="openshift-marketplace/redhat-marketplace-sk4n2" Nov 24 12:38:22 crc kubenswrapper[5072]: I1124 12:38:22.781722 5072 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-lwjlv\" (UniqueName: \"kubernetes.io/projected/a66ceb80-1985-4a36-b4af-f9e556b095d5-kube-api-access-lwjlv\") pod \"redhat-marketplace-sk4n2\" (UID: \"a66ceb80-1985-4a36-b4af-f9e556b095d5\") " pod="openshift-marketplace/redhat-marketplace-sk4n2" Nov 24 12:38:22 crc kubenswrapper[5072]: I1124 12:38:22.817829 5072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sk4n2" Nov 24 12:38:23 crc kubenswrapper[5072]: I1124 12:38:23.423722 5072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sk4n2"] Nov 24 12:38:24 crc kubenswrapper[5072]: I1124 12:38:24.343137 5072 generic.go:334] "Generic (PLEG): container finished" podID="a66ceb80-1985-4a36-b4af-f9e556b095d5" containerID="2fea6962bee6a0553ed040010d0eb58241c0f440ea55611ac7a516a2896e4e93" exitCode=0 Nov 24 12:38:24 crc kubenswrapper[5072]: I1124 12:38:24.343541 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sk4n2" event={"ID":"a66ceb80-1985-4a36-b4af-f9e556b095d5","Type":"ContainerDied","Data":"2fea6962bee6a0553ed040010d0eb58241c0f440ea55611ac7a516a2896e4e93"} Nov 24 12:38:24 crc kubenswrapper[5072]: I1124 12:38:24.343581 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sk4n2" event={"ID":"a66ceb80-1985-4a36-b4af-f9e556b095d5","Type":"ContainerStarted","Data":"7d0d92e38f7f39892524a5329cfc25fd94e85226f39995c1fcc0779639b920b8"} Nov 24 12:38:25 crc kubenswrapper[5072]: I1124 12:38:25.387542 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sk4n2" event={"ID":"a66ceb80-1985-4a36-b4af-f9e556b095d5","Type":"ContainerStarted","Data":"db0336db25ef945a59674ec499c612daa3ee0352309423ec694be7094d018939"} Nov 24 12:38:26 crc kubenswrapper[5072]: I1124 12:38:26.403501 5072 generic.go:334] "Generic (PLEG): container finished" podID="a66ceb80-1985-4a36-b4af-f9e556b095d5" containerID="db0336db25ef945a59674ec499c612daa3ee0352309423ec694be7094d018939" exitCode=0 Nov 24 12:38:26 crc kubenswrapper[5072]: I1124 12:38:26.403608 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sk4n2" event={"ID":"a66ceb80-1985-4a36-b4af-f9e556b095d5","Type":"ContainerDied","Data":"db0336db25ef945a59674ec499c612daa3ee0352309423ec694be7094d018939"} Nov 24 12:38:27 crc kubenswrapper[5072]: I1124 12:38:27.415735 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sk4n2" event={"ID":"a66ceb80-1985-4a36-b4af-f9e556b095d5","Type":"ContainerStarted","Data":"ab65d658671eca032cff584e064f720ec32c88ee6258fe8d101878626dd43fc5"} Nov 24 12:38:27 crc kubenswrapper[5072]: I1124 12:38:27.436297 5072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sk4n2" podStartSLOduration=2.698455532 podStartE2EDuration="5.436279064s" podCreationTimestamp="2025-11-24 12:38:22 +0000 UTC" firstStartedPulling="2025-11-24 12:38:24.346226426 +0000 UTC m=+5356.057750952" lastFinishedPulling="2025-11-24 12:38:27.084050008 +0000 UTC m=+5358.795574484" observedRunningTime="2025-11-24 12:38:27.432252114 +0000 UTC m=+5359.143776590" watchObservedRunningTime="2025-11-24 12:38:27.436279064 +0000 UTC m=+5359.147803540" Nov 24 12:38:30 crc kubenswrapper[5072]: I1124 12:38:30.017035 5072 scope.go:117] "RemoveContainer" 
containerID="19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c" Nov 24 12:38:30 crc kubenswrapper[5072]: E1124 12:38:30.017504 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:38:32 crc kubenswrapper[5072]: I1124 12:38:32.818457 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sk4n2" Nov 24 12:38:32 crc kubenswrapper[5072]: I1124 12:38:32.818993 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sk4n2" Nov 24 12:38:32 crc kubenswrapper[5072]: I1124 12:38:32.868397 5072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sk4n2" Nov 24 12:38:33 crc kubenswrapper[5072]: I1124 12:38:33.539384 5072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sk4n2" Nov 24 12:38:33 crc kubenswrapper[5072]: I1124 12:38:33.599043 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sk4n2"] Nov 24 12:38:35 crc kubenswrapper[5072]: I1124 12:38:35.523886 5072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sk4n2" podUID="a66ceb80-1985-4a36-b4af-f9e556b095d5" containerName="registry-server" containerID="cri-o://ab65d658671eca032cff584e064f720ec32c88ee6258fe8d101878626dd43fc5" gracePeriod=2 Nov 24 12:38:36 crc kubenswrapper[5072]: I1124 12:38:36.043191 5072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sk4n2" Nov 24 12:38:36 crc kubenswrapper[5072]: I1124 12:38:36.211353 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwjlv\" (UniqueName: \"kubernetes.io/projected/a66ceb80-1985-4a36-b4af-f9e556b095d5-kube-api-access-lwjlv\") pod \"a66ceb80-1985-4a36-b4af-f9e556b095d5\" (UID: \"a66ceb80-1985-4a36-b4af-f9e556b095d5\") " Nov 24 12:38:36 crc kubenswrapper[5072]: I1124 12:38:36.211448 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a66ceb80-1985-4a36-b4af-f9e556b095d5-catalog-content\") pod \"a66ceb80-1985-4a36-b4af-f9e556b095d5\" (UID: \"a66ceb80-1985-4a36-b4af-f9e556b095d5\") " Nov 24 12:38:36 crc kubenswrapper[5072]: I1124 12:38:36.211585 5072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a66ceb80-1985-4a36-b4af-f9e556b095d5-utilities\") pod \"a66ceb80-1985-4a36-b4af-f9e556b095d5\" (UID: \"a66ceb80-1985-4a36-b4af-f9e556b095d5\") " Nov 24 12:38:36 crc kubenswrapper[5072]: I1124 12:38:36.213545 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a66ceb80-1985-4a36-b4af-f9e556b095d5-utilities" (OuterVolumeSpecName: "utilities") pod "a66ceb80-1985-4a36-b4af-f9e556b095d5" (UID: "a66ceb80-1985-4a36-b4af-f9e556b095d5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:38:36 crc kubenswrapper[5072]: I1124 12:38:36.219206 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a66ceb80-1985-4a36-b4af-f9e556b095d5-kube-api-access-lwjlv" (OuterVolumeSpecName: "kube-api-access-lwjlv") pod "a66ceb80-1985-4a36-b4af-f9e556b095d5" (UID: "a66ceb80-1985-4a36-b4af-f9e556b095d5"). InnerVolumeSpecName "kube-api-access-lwjlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:38:36 crc kubenswrapper[5072]: I1124 12:38:36.284645 5072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a66ceb80-1985-4a36-b4af-f9e556b095d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a66ceb80-1985-4a36-b4af-f9e556b095d5" (UID: "a66ceb80-1985-4a36-b4af-f9e556b095d5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:38:36 crc kubenswrapper[5072]: I1124 12:38:36.315291 5072 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a66ceb80-1985-4a36-b4af-f9e556b095d5-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:38:36 crc kubenswrapper[5072]: I1124 12:38:36.315335 5072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwjlv\" (UniqueName: \"kubernetes.io/projected/a66ceb80-1985-4a36-b4af-f9e556b095d5-kube-api-access-lwjlv\") on node \"crc\" DevicePath \"\"" Nov 24 12:38:36 crc kubenswrapper[5072]: I1124 12:38:36.315351 5072 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a66ceb80-1985-4a36-b4af-f9e556b095d5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:38:36 crc kubenswrapper[5072]: I1124 12:38:36.537611 5072 generic.go:334] "Generic (PLEG): container finished" podID="a66ceb80-1985-4a36-b4af-f9e556b095d5" containerID="ab65d658671eca032cff584e064f720ec32c88ee6258fe8d101878626dd43fc5" exitCode=0 Nov 24 12:38:36 crc kubenswrapper[5072]: I1124 12:38:36.537657 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sk4n2" event={"ID":"a66ceb80-1985-4a36-b4af-f9e556b095d5","Type":"ContainerDied","Data":"ab65d658671eca032cff584e064f720ec32c88ee6258fe8d101878626dd43fc5"} Nov 24 12:38:36 crc kubenswrapper[5072]: I1124 12:38:36.537684 5072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sk4n2" event={"ID":"a66ceb80-1985-4a36-b4af-f9e556b095d5","Type":"ContainerDied","Data":"7d0d92e38f7f39892524a5329cfc25fd94e85226f39995c1fcc0779639b920b8"} Nov 24 12:38:36 crc kubenswrapper[5072]: I1124 12:38:36.537700 5072 scope.go:117] "RemoveContainer" containerID="ab65d658671eca032cff584e064f720ec32c88ee6258fe8d101878626dd43fc5" Nov 24 12:38:36 crc kubenswrapper[5072]: I1124 12:38:36.538827 5072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sk4n2" Nov 24 12:38:36 crc kubenswrapper[5072]: I1124 12:38:36.561278 5072 scope.go:117] "RemoveContainer" containerID="db0336db25ef945a59674ec499c612daa3ee0352309423ec694be7094d018939" Nov 24 12:38:36 crc kubenswrapper[5072]: I1124 12:38:36.581698 5072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sk4n2"] Nov 24 12:38:36 crc kubenswrapper[5072]: I1124 12:38:36.591778 5072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sk4n2"] Nov 24 12:38:36 crc kubenswrapper[5072]: I1124 12:38:36.608893 5072 scope.go:117] "RemoveContainer" containerID="2fea6962bee6a0553ed040010d0eb58241c0f440ea55611ac7a516a2896e4e93" Nov 24 12:38:36 crc kubenswrapper[5072]: I1124 12:38:36.643148 5072 scope.go:117] "RemoveContainer" containerID="ab65d658671eca032cff584e064f720ec32c88ee6258fe8d101878626dd43fc5" Nov 24 12:38:36 crc kubenswrapper[5072]: E1124 12:38:36.643737 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab65d658671eca032cff584e064f720ec32c88ee6258fe8d101878626dd43fc5\": container with ID starting with ab65d658671eca032cff584e064f720ec32c88ee6258fe8d101878626dd43fc5 not found: ID does not exist" containerID="ab65d658671eca032cff584e064f720ec32c88ee6258fe8d101878626dd43fc5" Nov 24 12:38:36 crc kubenswrapper[5072]: I1124 12:38:36.643787 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab65d658671eca032cff584e064f720ec32c88ee6258fe8d101878626dd43fc5"} err="failed to get container status \"ab65d658671eca032cff584e064f720ec32c88ee6258fe8d101878626dd43fc5\": rpc error: code = NotFound desc = could not find container \"ab65d658671eca032cff584e064f720ec32c88ee6258fe8d101878626dd43fc5\": container with ID starting with ab65d658671eca032cff584e064f720ec32c88ee6258fe8d101878626dd43fc5 not found: ID does not exist" Nov 24 12:38:36 crc kubenswrapper[5072]: I1124 12:38:36.643816 5072 scope.go:117] "RemoveContainer" containerID="db0336db25ef945a59674ec499c612daa3ee0352309423ec694be7094d018939" Nov 24 12:38:36 crc kubenswrapper[5072]: E1124 12:38:36.644219 5072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db0336db25ef945a59674ec499c612daa3ee0352309423ec694be7094d018939\": container with ID starting with db0336db25ef945a59674ec499c612daa3ee0352309423ec694be7094d018939 not found: ID does not exist" containerID="db0336db25ef945a59674ec499c612daa3ee0352309423ec694be7094d018939" Nov 24 12:38:36 crc kubenswrapper[5072]: I1124 12:38:36.644247 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db0336db25ef945a59674ec499c612daa3ee0352309423ec694be7094d018939"} err="failed to get container status \"db0336db25ef945a59674ec499c612daa3ee0352309423ec694be7094d018939\": rpc error: code = NotFound desc = could not find container \"db0336db25ef945a59674ec499c612daa3ee0352309423ec694be7094d018939\": container with ID starting with db0336db25ef945a59674ec499c612daa3ee0352309423ec694be7094d018939 not found: ID does not exist" Nov 24 12:38:36 crc kubenswrapper[5072]: I1124 12:38:36.644266 5072 scope.go:117] "RemoveContainer" containerID="2fea6962bee6a0553ed040010d0eb58241c0f440ea55611ac7a516a2896e4e93" Nov 24 12:38:36 crc kubenswrapper[5072]: E1124 12:38:36.644567 5072 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2fea6962bee6a0553ed040010d0eb58241c0f440ea55611ac7a516a2896e4e93\": container with ID starting with 2fea6962bee6a0553ed040010d0eb58241c0f440ea55611ac7a516a2896e4e93 not found: ID does not exist" containerID="2fea6962bee6a0553ed040010d0eb58241c0f440ea55611ac7a516a2896e4e93" Nov 24 12:38:36 crc kubenswrapper[5072]: I1124 12:38:36.644589 5072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fea6962bee6a0553ed040010d0eb58241c0f440ea55611ac7a516a2896e4e93"} err="failed to get container status \"2fea6962bee6a0553ed040010d0eb58241c0f440ea55611ac7a516a2896e4e93\": rpc error: code = NotFound desc = could not find container \"2fea6962bee6a0553ed040010d0eb58241c0f440ea55611ac7a516a2896e4e93\": container with ID starting with 2fea6962bee6a0553ed040010d0eb58241c0f440ea55611ac7a516a2896e4e93 not found: ID does not exist" Nov 24 12:38:37 crc kubenswrapper[5072]: I1124 12:38:37.051649 5072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a66ceb80-1985-4a36-b4af-f9e556b095d5" path="/var/lib/kubelet/pods/a66ceb80-1985-4a36-b4af-f9e556b095d5/volumes" Nov 24 12:38:45 crc kubenswrapper[5072]: I1124 12:38:45.018936 5072 scope.go:117] "RemoveContainer" containerID="19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c" Nov 24 12:38:45 crc kubenswrapper[5072]: E1124 12:38:45.019638 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:39:00 crc kubenswrapper[5072]: I1124 12:39:00.016228 5072 scope.go:117] "RemoveContainer" containerID="19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c" Nov 24 12:39:00 crc kubenswrapper[5072]: E1124 12:39:00.017259 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5" Nov 24 12:39:11 crc kubenswrapper[5072]: I1124 12:39:11.023295 5072 scope.go:117] "RemoveContainer" containerID="19c25482ac3f796b948d13f3b52c86e92224ddeeedd2b5a203612de4f14f6e8c" Nov 24 12:39:11 crc kubenswrapper[5072]: E1124 12:39:11.024098 5072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jfxnb_openshift-machine-config-operator(85ee6420-36f0-467c-acf4-ebea8b02c8d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-jfxnb" podUID="85ee6420-36f0-467c-acf4-ebea8b02c8d5"